Sep 11 00:28:46.809835 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:28:46.809858 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:28:46.809867 kernel: BIOS-provided physical RAM map:
Sep 11 00:28:46.809873 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 11 00:28:46.809879 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 11 00:28:46.809886 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 11 00:28:46.809893 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 11 00:28:46.809901 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 11 00:28:46.809908 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 11 00:28:46.809914 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 11 00:28:46.809920 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:28:46.809926 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 11 00:28:46.809932 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:28:46.809939 kernel: NX (Execute Disable) protection: active
Sep 11 00:28:46.809948 kernel: APIC: Static calls initialized
Sep 11 00:28:46.809955 kernel: SMBIOS 2.8 present.
Sep 11 00:28:46.809962 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 11 00:28:46.809969 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:28:46.809975 kernel: Hypervisor detected: KVM
Sep 11 00:28:46.809982 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:28:46.809989 kernel: kvm-clock: using sched offset of 3264587412 cycles
Sep 11 00:28:46.809996 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:28:46.810003 kernel: tsc: Detected 2794.748 MHz processor
Sep 11 00:28:46.810012 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:28:46.810020 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:28:46.810027 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 11 00:28:46.810034 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 11 00:28:46.810041 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:28:46.810048 kernel: Using GB pages for direct mapping
Sep 11 00:28:46.810055 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:28:46.810062 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 11 00:28:46.810069 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810078 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810085 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810092 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 11 00:28:46.810099 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810106 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810113 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810120 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:28:46.810127 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 11 00:28:46.810138 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 11 00:28:46.810145 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 11 00:28:46.810152 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 11 00:28:46.810159 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 11 00:28:46.810166 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 11 00:28:46.810173 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 11 00:28:46.810182 kernel: No NUMA configuration found
Sep 11 00:28:46.810189 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 11 00:28:46.810197 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 11 00:28:46.810204 kernel: Zone ranges:
Sep 11 00:28:46.810213 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:28:46.810220 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 11 00:28:46.810229 kernel: Normal empty
Sep 11 00:28:46.810237 kernel: Device empty
Sep 11 00:28:46.810244 kernel: Movable zone start for each node
Sep 11 00:28:46.810251 kernel: Early memory node ranges
Sep 11 00:28:46.810260 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 11 00:28:46.810267 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 11 00:28:46.810274 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 11 00:28:46.810281 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:28:46.810288 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 11 00:28:46.810295 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 11 00:28:46.810302 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:28:46.810309 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:28:46.810317 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:28:46.810325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:28:46.810333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:28:46.810340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:28:46.810347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:28:46.810354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:28:46.810361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:28:46.810368 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:28:46.810375 kernel: TSC deadline timer available
Sep 11 00:28:46.810382 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:28:46.810391 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:28:46.810398 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:28:46.810405 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:28:46.810412 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:28:46.810419 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:28:46.810426 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:28:46.810433 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:28:46.810440 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:28:46.810447 kernel: kvm-guest: setup PV sched yield
Sep 11 00:28:46.810454 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 11 00:28:46.810463 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:28:46.810480 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:28:46.810488 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:28:46.810503 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:28:46.810518 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:28:46.810525 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:28:46.810550 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:28:46.810557 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:28:46.810566 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:28:46.810577 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:28:46.810584 kernel: random: crng init done
Sep 11 00:28:46.810595 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:28:46.810603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:28:46.810610 kernel: Fallback order for Node 0: 0
Sep 11 00:28:46.810617 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 11 00:28:46.810624 kernel: Policy zone: DMA32
Sep 11 00:28:46.810631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:28:46.810640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:28:46.810648 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:28:46.810655 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:28:46.810662 kernel: Dynamic Preempt: voluntary
Sep 11 00:28:46.810669 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:28:46.810677 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:28:46.810684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:28:46.810691 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:28:46.810699 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:28:46.810707 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:28:46.810715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:28:46.810722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:28:46.810738 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:28:46.810745 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:28:46.810752 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:28:46.810760 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:28:46.810767 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:28:46.810783 kernel: Console: colour VGA+ 80x25
Sep 11 00:28:46.810790 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:28:46.810798 kernel: ACPI: Core revision 20240827
Sep 11 00:28:46.810806 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:28:46.810815 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:28:46.810822 kernel: x2apic enabled
Sep 11 00:28:46.810830 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:28:46.810837 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:28:46.810845 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:28:46.810854 kernel: kvm-guest: setup PV IPIs
Sep 11 00:28:46.810861 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:28:46.810869 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:28:46.810876 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 11 00:28:46.810884 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:28:46.810891 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:28:46.810898 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:28:46.810906 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:28:46.810915 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:28:46.810923 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:28:46.810930 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:28:46.810937 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:28:46.810945 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:28:46.810952 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:28:46.810960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:28:46.810967 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:28:46.810975 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:28:46.810986 kernel: active return thunk: srso_return_thunk
Sep 11 00:28:46.810997 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:28:46.811007 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:28:46.811017 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:28:46.811025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:28:46.811033 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:28:46.811040 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:28:46.811048 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:28:46.811058 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:28:46.811065 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:28:46.811072 kernel: landlock: Up and running.
Sep 11 00:28:46.811079 kernel: SELinux: Initializing.
Sep 11 00:28:46.811087 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:28:46.811094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:28:46.811102 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:28:46.811109 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:28:46.811117 kernel: ... version:                0
Sep 11 00:28:46.811126 kernel: ... bit width:              48
Sep 11 00:28:46.811133 kernel: ... generic registers:      6
Sep 11 00:28:46.811140 kernel: ... value mask:             0000ffffffffffff
Sep 11 00:28:46.811147 kernel: ... max period:             00007fffffffffff
Sep 11 00:28:46.811155 kernel: ... fixed-purpose events:   0
Sep 11 00:28:46.811162 kernel: ... event mask:             000000000000003f
Sep 11 00:28:46.811169 kernel: signal: max sigframe size: 1776
Sep 11 00:28:46.811177 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:28:46.811184 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:28:46.811191 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:28:46.811201 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:28:46.811208 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:28:46.811215 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:28:46.811223 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:28:46.811230 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 11 00:28:46.811238 kernel: Memory: 2430968K/2571752K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 134856K reserved, 0K cma-reserved)
Sep 11 00:28:46.811245 kernel: devtmpfs: initialized
Sep 11 00:28:46.811253 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:28:46.811260 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:28:46.811270 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:28:46.811277 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:28:46.811284 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:28:46.811292 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:28:46.811301 kernel: audit: type=2000 audit(1757550523.903:1): state=initialized audit_enabled=0 res=1
Sep 11 00:28:46.811309 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:28:46.811316 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:28:46.811324 kernel: cpuidle: using governor menu
Sep 11 00:28:46.811333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:28:46.811343 kernel: dca service started, version 1.12.1
Sep 11 00:28:46.811353 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 11 00:28:46.811361 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 11 00:28:46.811370 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:28:46.811380 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:28:46.811401 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:28:46.811412 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:28:46.811420 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:28:46.811437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:28:46.811445 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:28:46.811453 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:28:46.811460 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:28:46.811471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:28:46.811481 kernel: ACPI: Interpreter enabled
Sep 11 00:28:46.811496 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 11 00:28:46.811506 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:28:46.811517 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:28:46.811526 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:28:46.811602 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:28:46.811609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:28:46.811797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:28:46.811923 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:28:46.812039 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:28:46.812049 kernel: PCI host bridge to bus 0000:00
Sep 11 00:28:46.812168 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:28:46.812305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:28:46.812416 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:28:46.812520 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 11 00:28:46.812643 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 11 00:28:46.812756 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 11 00:28:46.812862 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:28:46.813001 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:28:46.813128 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:28:46.813244 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 11 00:28:46.813386 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 11 00:28:46.813518 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 11 00:28:46.813683 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:28:46.813836 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:28:46.813961 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 11 00:28:46.814076 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 11 00:28:46.814194 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 11 00:28:46.814343 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:28:46.814459 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 11 00:28:46.814588 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 11 00:28:46.814705 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 11 00:28:46.814842 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:28:46.814958 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 11 00:28:46.815072 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 11 00:28:46.815190 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 11 00:28:46.815319 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 11 00:28:46.815471 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:28:46.815626 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:28:46.815763 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:28:46.815883 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 11 00:28:46.816003 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 11 00:28:46.816138 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:28:46.816302 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 11 00:28:46.816318 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:28:46.816334 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:28:46.816341 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:28:46.816349 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:28:46.816356 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:28:46.816364 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:28:46.816371 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:28:46.816379 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:28:46.816386 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:28:46.816393 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:28:46.816403 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:28:46.816410 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:28:46.816418 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:28:46.816425 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:28:46.816432 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:28:46.816440 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:28:46.816447 kernel: iommu: Default domain type: Translated
Sep 11 00:28:46.816454 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:28:46.816462 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:28:46.816471 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:28:46.816478 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 11 00:28:46.816485 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 11 00:28:46.816625 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:28:46.816749 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:28:46.816870 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:28:46.816880 kernel: vgaarb: loaded
Sep 11 00:28:46.816888 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:28:46.816899 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:28:46.816907 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:28:46.816914 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:28:46.816922 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:28:46.816929 kernel: pnp: PnP ACPI init
Sep 11 00:28:46.817059 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 11 00:28:46.817070 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:28:46.817078 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:28:46.817088 kernel: NET: Registered PF_INET protocol family
Sep 11 00:28:46.817096 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:28:46.817103 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:28:46.817111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:28:46.817119 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:28:46.817126 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:28:46.817134 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:28:46.817141 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:28:46.817148 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:28:46.817158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:28:46.817166 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:28:46.817275 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:28:46.817406 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:28:46.817525 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:28:46.817669 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 11 00:28:46.817805 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 11 00:28:46.817920 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 11 00:28:46.817935 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:28:46.817943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:28:46.817951 kernel: Initialise system trusted keyrings
Sep 11 00:28:46.817958 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:28:46.817966 kernel: Key type asymmetric registered
Sep 11 00:28:46.817973 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:28:46.817981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:28:46.817988 kernel: io scheduler mq-deadline registered
Sep 11 00:28:46.817997 kernel: io scheduler kyber registered
Sep 11 00:28:46.818010 kernel: io scheduler bfq registered
Sep 11 00:28:46.818020 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:28:46.818031 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:28:46.818042 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:28:46.818050 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:28:46.818058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:28:46.818065 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:28:46.818073 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:28:46.818080 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:28:46.818088 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:28:46.818228 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:28:46.818359 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:28:46.818369 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:28:46.818476 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:28:46 UTC (1757550526)
Sep 11 00:28:46.818604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 11 00:28:46.818614 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:28:46.818622 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:28:46.818633 kernel: Segment Routing with IPv6
Sep 11 00:28:46.818640 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:28:46.818648 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:28:46.818655 kernel: Key type dns_resolver registered
Sep 11 00:28:46.818662 kernel: IPI shorthand broadcast: enabled
Sep 11 00:28:46.818670 kernel: sched_clock: Marking stable (2734001832, 110035049)->(2858669487, -14632606)
Sep 11 00:28:46.818677 kernel: registered taskstats version 1
Sep 11 00:28:46.818685 kernel: Loading compiled-in X.509 certificates
Sep 11 00:28:46.818692 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:28:46.818702 kernel: Demotion targets for Node 0: null
Sep 11 00:28:46.818709 kernel: Key type .fscrypt registered
Sep 11 00:28:46.818717 kernel: Key type fscrypt-provisioning registered
Sep 11 00:28:46.818724 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:28:46.818740 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:28:46.818748 kernel: ima: No architecture policies found
Sep 11 00:28:46.818755 kernel: clk: Disabling unused clocks
Sep 11 00:28:46.818762 kernel: Warning: unable to open an initial console.
Sep 11 00:28:46.818770 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 11 00:28:46.818781 kernel: Write protecting the kernel read-only data: 24576k
Sep 11 00:28:46.818788 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 11 00:28:46.818796 kernel: Run /init as init process
Sep 11 00:28:46.818803 kernel:   with arguments:
Sep 11 00:28:46.818810 kernel:     /init
Sep 11 00:28:46.818818 kernel:   with environment:
Sep 11 00:28:46.818825 kernel:     HOME=/
Sep 11 00:28:46.818832 kernel:     TERM=linux
Sep 11 00:28:46.818839 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:28:46.818850 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:28:46.818871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:28:46.818881 systemd[1]: Detected virtualization kvm.
Sep 11 00:28:46.818889 systemd[1]: Detected architecture x86-64.
Sep 11 00:28:46.818897 systemd[1]: Running in initrd.
Sep 11 00:28:46.818907 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:28:46.818916 systemd[1]: Hostname set to .
Sep 11 00:28:46.818924 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:28:46.818932 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:28:46.818940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:28:46.818948 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:28:46.818957 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:28:46.818965 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:28:46.818976 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:28:46.818984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:28:46.818994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:28:46.819002 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:28:46.819010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:28:46.819019 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:28:46.819027 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:28:46.819037 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:28:46.819045 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:28:46.819055 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:28:46.819063 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:28:46.819071 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:28:46.819080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:28:46.819088 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:28:46.819096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:28:46.819104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:28:46.819114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:28:46.819123 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:28:46.819131 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:28:46.819139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:28:46.819149 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:28:46.819160 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:28:46.819168 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:28:46.819176 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:28:46.819185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:28:46.819193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:28:46.819201 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:28:46.819212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:28:46.819220 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:28:46.819229 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:28:46.819256 systemd-journald[220]: Collecting audit messages is disabled.
Sep 11 00:28:46.819285 systemd-journald[220]: Journal started
Sep 11 00:28:46.819303 systemd-journald[220]: Runtime Journal (/run/log/journal/b1e0f641afe74ff8bfc3c1dc880d8672) is 6M, max 48.6M, 42.5M free.
Sep 11 00:28:46.809359 systemd-modules-load[222]: Inserted module 'overlay'
Sep 11 00:28:46.852170 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:28:46.852208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:28:46.852224 kernel: Bridge firewalling registered
Sep 11 00:28:46.838304 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 11 00:28:46.852573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:28:46.854481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:28:46.855909 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:28:46.859715 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:28:46.862437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:28:46.865509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:28:46.869587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:28:46.876330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:28:46.879975 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:28:46.884085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:28:46.885278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:28:46.887762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:28:46.892404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:28:46.901301 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:28:46.923918 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:28:46.935238 systemd-resolved[259]: Positive Trust Anchors:
Sep 11 00:28:46.935252 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:28:46.935282 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:28:46.937861 systemd-resolved[259]: Defaulting to hostname 'linux'.
Sep 11 00:28:46.938922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:28:46.946150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:28:47.046587 kernel: SCSI subsystem initialized
Sep 11 00:28:47.055577 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:28:47.067573 kernel: iscsi: registered transport (tcp)
Sep 11 00:28:47.089573 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:28:47.089603 kernel: QLogic iSCSI HBA Driver
Sep 11 00:28:47.111025 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:28:47.135879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:28:47.139806 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:28:47.196181 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:28:47.198906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:28:47.259572 kernel: raid6: avx2x4 gen() 29477 MB/s
Sep 11 00:28:47.276560 kernel: raid6: avx2x2 gen() 30106 MB/s
Sep 11 00:28:47.293608 kernel: raid6: avx2x1 gen() 25088 MB/s
Sep 11 00:28:47.293629 kernel: raid6: using algorithm avx2x2 gen() 30106 MB/s
Sep 11 00:28:47.311619 kernel: raid6: .... xor() 19476 MB/s, rmw enabled
Sep 11 00:28:47.311641 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:28:47.332568 kernel: xor: automatically using best checksumming function avx
Sep 11 00:28:47.497571 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:28:47.506270 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:28:47.509025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:28:47.549136 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 11 00:28:47.555740 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:28:47.557809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:28:47.583488 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Sep 11 00:28:47.611480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:28:47.614002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:28:47.692710 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:28:47.696642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:28:47.731840 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:28:47.743556 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 11 00:28:47.746565 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 11 00:28:47.765632 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:28:47.770571 kernel: libata version 3.00 loaded.
Sep 11 00:28:47.774266 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 00:28:47.784095 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:28:47.784123 kernel: GPT:9289727 != 19775487
Sep 11 00:28:47.784133 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:28:47.784143 kernel: GPT:9289727 != 19775487
Sep 11 00:28:47.785036 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:28:47.785069 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:28:47.787713 kernel: ahci 0000:00:1f.2: version 3.0
Sep 11 00:28:47.787993 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 11 00:28:47.792390 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 11 00:28:47.792667 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 11 00:28:47.792838 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 11 00:28:47.795499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:28:47.802851 kernel: scsi host0: ahci
Sep 11 00:28:47.803108 kernel: scsi host1: ahci
Sep 11 00:28:47.803297 kernel: scsi host2: ahci
Sep 11 00:28:47.803480 kernel: scsi host3: ahci
Sep 11 00:28:47.803689 kernel: scsi host4: ahci
Sep 11 00:28:47.803881 kernel: scsi host5: ahci
Sep 11 00:28:47.804081 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 11 00:28:47.804102 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 11 00:28:47.804118 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 11 00:28:47.804135 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 11 00:28:47.804151 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 11 00:28:47.804167 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 11 00:28:47.795649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:28:47.807503 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:28:47.811612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:28:47.814802 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:28:47.840685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:28:47.873496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:28:47.890678 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:28:47.901761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:28:47.910843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:28:47.913336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:28:47.917424 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:28:47.953466 disk-uuid[636]: Primary Header is updated.
Sep 11 00:28:47.953466 disk-uuid[636]: Secondary Entries is updated.
Sep 11 00:28:47.953466 disk-uuid[636]: Secondary Header is updated.
Sep 11 00:28:47.956858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:28:47.961572 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:28:48.114661 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 11 00:28:48.114721 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 11 00:28:48.115575 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 11 00:28:48.115636 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 11 00:28:48.116569 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 11 00:28:48.117565 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 11 00:28:48.118576 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:28:48.118602 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 11 00:28:48.119026 kernel: ata3.00: applying bridge limits
Sep 11 00:28:48.119572 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:28:48.120660 kernel: ata3.00: configured for UDMA/100
Sep 11 00:28:48.121578 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 11 00:28:48.173057 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 11 00:28:48.173279 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:28:48.193595 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 11 00:28:48.608168 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:28:48.610789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:28:48.613177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:28:48.615381 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:28:48.618396 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:28:48.648326 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:28:48.963200 disk-uuid[637]: The operation has completed successfully.
Sep 11 00:28:48.964502 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:28:48.993423 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:28:48.993585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:28:49.027839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:28:49.051640 sh[665]: Success
Sep 11 00:28:49.068566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:28:49.068598 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:28:49.070191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:28:49.079576 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 11 00:28:49.108968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:28:49.112909 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:28:49.130464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:28:49.136563 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677)
Sep 11 00:28:49.138561 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66
Sep 11 00:28:49.138582 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:28:49.143088 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:28:49.143106 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:28:49.144226 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:28:49.146379 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:28:49.148673 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:28:49.151204 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:28:49.153794 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:28:49.181051 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Sep 11 00:28:49.181092 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:28:49.181103 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:28:49.184853 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:28:49.184899 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:28:49.189602 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:28:49.190905 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:28:49.194421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:28:49.278589 ignition[753]: Ignition 2.21.0
Sep 11 00:28:49.278601 ignition[753]: Stage: fetch-offline
Sep 11 00:28:49.278633 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:49.278641 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:49.278743 ignition[753]: parsed url from cmdline: ""
Sep 11 00:28:49.278747 ignition[753]: no config URL provided
Sep 11 00:28:49.278752 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:28:49.278760 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:28:49.278781 ignition[753]: op(1): [started] loading QEMU firmware config module
Sep 11 00:28:49.285343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:28:49.278786 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 00:28:49.289604 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:28:49.295919 ignition[753]: op(1): [finished] loading QEMU firmware config module
Sep 11 00:28:49.331947 systemd-networkd[855]: lo: Link UP
Sep 11 00:28:49.331957 systemd-networkd[855]: lo: Gained carrier
Sep 11 00:28:49.333506 systemd-networkd[855]: Enumeration completed
Sep 11 00:28:49.333745 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:28:49.333867 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:28:49.333871 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:28:49.334817 systemd-networkd[855]: eth0: Link UP
Sep 11 00:28:49.334970 systemd-networkd[855]: eth0: Gained carrier
Sep 11 00:28:49.334977 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:28:49.335602 systemd[1]: Reached target network.target - Network.
Sep 11 00:28:49.347580 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:28:49.349531 ignition[753]: parsing config with SHA512: ccb4fc2bfe21725adad2ef299b0851d82a07af90d2213937e1a2ac07e96a71fcd2a2564dff4113c814169bf4490788dea4f4b19f3d37bfa508f611e995f9c27a
Sep 11 00:28:49.356432 unknown[753]: fetched base config from "system"
Sep 11 00:28:49.356553 unknown[753]: fetched user config from "qemu"
Sep 11 00:28:49.357065 ignition[753]: fetch-offline: fetch-offline passed
Sep 11 00:28:49.357139 ignition[753]: Ignition finished successfully
Sep 11 00:28:49.360391 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:28:49.361782 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 00:28:49.362878 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:28:49.400632 ignition[860]: Ignition 2.21.0
Sep 11 00:28:49.400645 ignition[860]: Stage: kargs
Sep 11 00:28:49.400770 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:49.400781 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:49.401903 ignition[860]: kargs: kargs passed
Sep 11 00:28:49.401970 ignition[860]: Ignition finished successfully
Sep 11 00:28:49.406317 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:28:49.408424 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:28:49.450559 ignition[869]: Ignition 2.21.0
Sep 11 00:28:49.450574 ignition[869]: Stage: disks
Sep 11 00:28:49.450758 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:49.450769 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:49.452262 ignition[869]: disks: disks passed
Sep 11 00:28:49.454750 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:28:49.452333 ignition[869]: Ignition finished successfully
Sep 11 00:28:49.455976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:28:49.457560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:28:49.459661 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:28:49.460687 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:28:49.462445 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:28:49.463393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:28:49.500830 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 00:28:49.508753 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:28:49.510861 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:28:49.613566 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none.
Sep 11 00:28:49.614277 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:28:49.614871 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:28:49.618229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:28:49.620559 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:28:49.621819 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 00:28:49.621858 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:28:49.621880 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:28:49.641765 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:28:49.643014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:28:49.649568 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Sep 11 00:28:49.651922 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:28:49.651945 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:28:49.655747 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:28:49.655808 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:28:49.657578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:28:49.678978 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:28:49.683284 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:28:49.688053 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:28:49.692797 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:28:49.781343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:28:49.785020 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:28:49.787643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:28:49.827570 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:28:49.839103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:28:49.850559 ignition[1002]: INFO : Ignition 2.21.0
Sep 11 00:28:49.850559 ignition[1002]: INFO : Stage: mount
Sep 11 00:28:49.850559 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:49.850559 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:49.854337 ignition[1002]: INFO : mount: mount passed
Sep 11 00:28:49.854337 ignition[1002]: INFO : Ignition finished successfully
Sep 11 00:28:49.858180 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:28:49.860193 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:28:50.136791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:28:50.138397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:28:50.168072 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 11 00:28:50.168100 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:28:50.168112 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:28:50.171919 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:28:50.171942 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:28:50.173469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:28:50.201582 ignition[1031]: INFO : Ignition 2.21.0
Sep 11 00:28:50.201582 ignition[1031]: INFO : Stage: files
Sep 11 00:28:50.203445 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:50.203445 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:50.203445 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:28:50.206944 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:28:50.206944 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:28:50.210027 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:28:50.211606 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:28:50.213086 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 11 00:28:50.214279 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:28:50.215706 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:28:50.215706 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 11 00:28:50.267308 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:28:50.690144 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:28:50.690144 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:28:50.694314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:28:50.706444 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 11 00:28:51.005211 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:28:51.387700 systemd-networkd[855]: eth0: Gained IPv6LL
Sep 11 00:28:51.392960 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:28:51.395374 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:28:51.396924 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:28:51.403506 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:28:51.403506 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:28:51.406702 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:28:51.406702 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:28:51.409986 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:28:51.409986 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:28:51.409986 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:28:51.429215 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:28:51.434409 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:28:51.436406 ignition[1031]: INFO : files: files passed
Sep 11 00:28:51.436406 ignition[1031]: INFO : Ignition finished successfully
Sep 11 00:28:51.439825 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:28:51.444874 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:28:51.464431 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:28:51.468988 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:28:51.473668 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:28:51.481011 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:28:51.485138 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:28:51.486897 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:28:51.488512 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:28:51.491939 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:28:51.493440 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:28:51.497047 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:28:51.565102 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:28:51.565276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:28:51.567829 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:28:51.570089 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:28:51.572143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:28:51.574755 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:28:51.611861 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:28:51.614580 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:28:51.644693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:28:51.644905 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:28:51.647137 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:28:51.649320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:28:51.649470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:28:51.654216 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:28:51.654359 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:28:51.657176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:28:51.658078 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:28:51.658411 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:28:51.658910 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:28:51.659229 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:28:51.659573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:28:51.660058 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:28:51.660377 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:28:51.660870 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:28:51.661164 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:28:51.661278 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:28:51.677789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:28:51.677939 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:28:51.678224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:28:51.682912 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:28:51.685386 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:28:51.685529 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:28:51.688386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:28:51.688527 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:28:51.689631 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:28:51.690032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:28:51.696607 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:28:51.696762 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:28:51.699298 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:28:51.699825 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:28:51.699915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:28:51.702664 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:28:51.702744 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:28:51.703151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:28:51.703269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:28:51.706072 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:28:51.706175 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:28:51.711036 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:28:51.712997 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:28:51.713113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:28:51.715789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:28:51.717031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:28:51.717151 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:28:51.718770 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:28:51.718870 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:28:51.727918 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:28:51.728932 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:28:51.745263 ignition[1087]: INFO : Ignition 2.21.0
Sep 11 00:28:51.746354 ignition[1087]: INFO : Stage: umount
Sep 11 00:28:51.746354 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:28:51.746354 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:28:51.751172 ignition[1087]: INFO : umount: umount passed
Sep 11 00:28:51.751172 ignition[1087]: INFO : Ignition finished successfully
Sep 11 00:28:51.751060 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:28:51.755205 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:28:51.755335 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:28:51.757346 systemd[1]: Stopped target network.target - Network.
Sep 11 00:28:51.759589 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:28:51.759666 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:28:51.761470 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:28:51.761515 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:28:51.762559 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:28:51.762626 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:28:51.763784 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:28:51.763826 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:28:51.764187 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:28:51.768099 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:28:51.777623 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:28:51.777778 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:28:51.781913 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:28:51.782104 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:28:51.782227 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:28:51.786052 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:28:51.786749 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:28:51.789680 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:28:51.789750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:28:51.790957 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:28:51.794504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:28:51.794598 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:28:51.796922 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:28:51.796970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:28:51.800579 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:28:51.800644 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:28:51.802587 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:28:51.802648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:28:51.806806 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:28:51.809521 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:28:51.809615 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:28:51.818316 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:28:51.818514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:28:51.819736 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:28:51.819780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:28:51.821818 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:28:51.821853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:28:51.822278 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:28:51.822324 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:28:51.827330 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:28:51.827380 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:28:51.830236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:28:51.830284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:28:51.836652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:28:51.838901 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:28:51.838971 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:28:51.842444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:28:51.842494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:28:51.846128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:28:51.846209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:28:51.850743 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:28:51.850805 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:28:51.850854 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:28:51.851240 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:28:51.857697 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:28:51.867047 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:28:51.867165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:28:51.971038 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:28:51.971195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:28:51.971673 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:28:51.971941 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:28:51.972004 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:28:51.973078 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:28:51.995981 systemd[1]: Switching root.
Sep 11 00:28:52.031031 systemd-journald[220]: Journal stopped
Sep 11 00:28:53.248487 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:28:53.248652 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:28:53.248680 kernel: SELinux: policy capability open_perms=1
Sep 11 00:28:53.248694 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:28:53.248705 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:28:53.248716 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:28:53.248727 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:28:53.248744 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:28:53.248755 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:28:53.248766 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:28:53.248777 kernel: audit: type=1403 audit(1757550532.464:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:28:53.248801 systemd[1]: Successfully loaded SELinux policy in 48.119ms.
Sep 11 00:28:53.248827 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.453ms.
Sep 11 00:28:53.248841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:28:53.248854 systemd[1]: Detected virtualization kvm.
Sep 11 00:28:53.248866 systemd[1]: Detected architecture x86-64.
Sep 11 00:28:53.248877 systemd[1]: Detected first boot.
Sep 11 00:28:53.248889 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:28:53.248901 zram_generator::config[1134]: No configuration found.
Sep 11 00:28:53.248914 kernel: Guest personality initialized and is inactive
Sep 11 00:28:53.248927 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:28:53.248939 kernel: Initialized host personality
Sep 11 00:28:53.248950 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:28:53.248961 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:28:53.248973 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:28:53.248985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:28:53.248997 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:28:53.249010 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:28:53.249022 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:28:53.249037 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:28:53.249049 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:28:53.249061 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:28:53.249074 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:28:53.249086 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:28:53.249099 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:28:53.249110 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:28:53.249122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:28:53.249135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:28:53.249149 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:28:53.249161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:28:53.249174 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:28:53.249187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:28:53.249199 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:28:53.249211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:28:53.249223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:28:53.249237 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:28:53.249249 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:28:53.249261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:28:53.249273 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:28:53.249285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:28:53.249298 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:28:53.249310 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:28:53.249322 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:28:53.249334 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:28:53.249349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:28:53.249361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:28:53.249374 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:28:53.249389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:28:53.249404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:28:53.249419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:28:53.249434 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:28:53.249449 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:28:53.249463 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:28:53.249477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:53.249489 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:28:53.250648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:28:53.250662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:28:53.250675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:28:53.250687 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:28:53.250699 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:28:53.250711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:28:53.250727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:28:53.250739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:28:53.250750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:28:53.250762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:28:53.250774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:28:53.250786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:28:53.250798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:28:53.250810 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:28:53.250822 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:28:53.250836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:28:53.250849 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:28:53.250861 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:28:53.250873 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:28:53.250886 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:28:53.250897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:28:53.250909 kernel: loop: module loaded
Sep 11 00:28:53.250920 kernel: fuse: init (API version 7.41)
Sep 11 00:28:53.250932 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:28:53.250947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:28:53.250959 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:28:53.250971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:28:53.250983 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:28:53.250997 systemd[1]: Stopped verity-setup.service.
Sep 11 00:28:53.251012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:53.251024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:28:53.251036 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:28:53.251048 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:28:53.251062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:28:53.251076 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:28:53.251088 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:28:53.251101 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:28:53.251113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:28:53.251125 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:28:53.251136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:28:53.251148 kernel: ACPI: bus type drm_connector registered
Sep 11 00:28:53.251183 systemd-journald[1206]: Collecting audit messages is disabled.
Sep 11 00:28:53.251208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:28:53.251220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:28:53.251232 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:28:53.251245 systemd-journald[1206]: Journal started
Sep 11 00:28:53.251269 systemd-journald[1206]: Runtime Journal (/run/log/journal/b1e0f641afe74ff8bfc3c1dc880d8672) is 6M, max 48.6M, 42.5M free.
Sep 11 00:28:52.998034 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:28:53.021468 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:28:53.021937 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:28:53.252915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:28:53.256056 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:28:53.256907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:28:53.257121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:28:53.258650 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:28:53.258866 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:28:53.260240 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:28:53.260454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:28:53.261995 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:28:53.263402 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:28:53.265062 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:28:53.266748 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:28:53.282020 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:28:53.284709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:28:53.287137 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:28:53.288343 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:28:53.288376 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:28:53.290504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:28:53.297584 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:28:53.299754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:28:53.301017 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:28:53.303962 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:28:53.305152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:28:53.307621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:28:53.309653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:28:53.310697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:28:53.313671 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:28:53.316199 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:28:53.318871 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:28:53.320127 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:28:53.325165 systemd-journald[1206]: Time spent on flushing to /var/log/journal/b1e0f641afe74ff8bfc3c1dc880d8672 is 16.912ms for 985 entries.
Sep 11 00:28:53.325165 systemd-journald[1206]: System Journal (/var/log/journal/b1e0f641afe74ff8bfc3c1dc880d8672) is 8M, max 195.6M, 187.6M free.
Sep 11 00:28:53.354821 systemd-journald[1206]: Received client request to flush runtime journal.
Sep 11 00:28:53.354864 kernel: loop0: detected capacity change from 0 to 113872
Sep 11 00:28:53.334095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:28:53.345231 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:28:53.346731 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:28:53.350671 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:28:53.357685 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:28:53.366873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:28:53.370587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:28:53.384399 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:28:53.386207 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:28:53.389750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:28:53.392034 kernel: loop1: detected capacity change from 0 to 146240
Sep 11 00:28:53.415845 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 11 00:28:53.415864 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Sep 11 00:28:53.421325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:28:53.427568 kernel: loop2: detected capacity change from 0 to 229808
Sep 11 00:28:53.462585 kernel: loop3: detected capacity change from 0 to 113872
Sep 11 00:28:53.477755 kernel: loop4: detected capacity change from 0 to 146240
Sep 11 00:28:53.492570 kernel: loop5: detected capacity change from 0 to 229808
Sep 11 00:28:53.504127 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 00:28:53.505383 (sd-merge)[1276]: Merged extensions into '/usr'.
Sep 11 00:28:53.510884 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:28:53.510903 systemd[1]: Reloading...
Sep 11 00:28:53.581580 zram_generator::config[1314]: No configuration found.
Sep 11 00:28:53.653116 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:28:53.672355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:28:53.754746 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:28:53.755279 systemd[1]: Reloading finished in 243 ms.
Sep 11 00:28:53.790095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:28:53.791679 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:28:53.819995 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:28:53.821866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:28:53.847602 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 00:28:53.847642 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 00:28:53.847942 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 00:28:53.848186 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 00:28:53.849100 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 00:28:53.849357 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 11 00:28:53.849428 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 11 00:28:53.853669 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:28:53.853683 systemd-tmpfiles[1340]: Skipping /boot
Sep 11 00:28:53.854040 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Sep 11 00:28:53.854059 systemd[1]: Reloading...
Sep 11 00:28:53.866056 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:28:53.866067 systemd-tmpfiles[1340]: Skipping /boot
Sep 11 00:28:53.906604 zram_generator::config[1370]: No configuration found.
Sep 11 00:28:53.997058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:28:54.080517 systemd[1]: Reloading finished in 226 ms.
Sep 11 00:28:54.105601 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:28:54.132680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:28:54.141834 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:28:54.144248 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 00:28:54.168614 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 00:28:54.172652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:28:54.176718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:28:54.179033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 00:28:54.183904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.184073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:28:54.185298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:28:54.191287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:28:54.194798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:28:54.196133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:28:54.196232 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:28:54.199787 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:28:54.200819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.202105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:28:54.202309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:28:54.205917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:28:54.214806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:28:54.217036 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 00:28:54.219311 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:28:54.219514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:28:54.231029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.231259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:28:54.233762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:28:54.234571 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
Sep 11 00:28:54.236727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:28:54.238576 augenrules[1440]: No rules
Sep 11 00:28:54.251772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:28:54.252918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:28:54.253060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:28:54.255784 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 00:28:54.256842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.258663 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:28:54.259040 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:28:54.260890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:28:54.263114 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 00:28:54.264905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:28:54.267149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:28:54.267644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:28:54.269146 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:28:54.270866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:28:54.271111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:28:54.272984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:28:54.273255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:28:54.278171 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 00:28:54.303192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.305630 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:28:54.306781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:28:54.307752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:28:54.311014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:28:54.317969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:28:54.322789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:28:54.325706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:28:54.325750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:28:54.329561 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:28:54.330867 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 00:28:54.330902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:28:54.338177 systemd[1]: Finished ensure-sysext.service.
Sep 11 00:28:54.339754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:28:54.344835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:28:54.346618 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:28:54.346851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:28:54.348369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:28:54.348643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:28:54.349660 augenrules[1489]: /sbin/augenrules: No change
Sep 11 00:28:54.351982 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:28:54.352615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:28:54.365198 augenrules[1521]: No rules
Sep 11 00:28:54.365550 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 11 00:28:54.366182 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:28:54.368907 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:28:54.373751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:28:54.373819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:28:54.376722 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 00:28:54.433567 kernel: mousedev: PS/2 mouse device common for all mice
Sep 11 00:28:54.438573 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 11 00:28:54.445702 kernel: ACPI: button: Power Button [PWRF]
Sep 11 00:28:54.448350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:28:54.450965 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 00:28:54.466765 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 11 00:28:54.466992 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 11 00:28:54.476601 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 00:28:54.538992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:28:54.577134 systemd-resolved[1409]: Positive Trust Anchors:
Sep 11 00:28:54.577153 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:28:54.577186 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:28:54.581530 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 00:28:54.583026 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:28:54.584701 systemd-resolved[1409]: Defaulting to hostname 'linux'.
Sep 11 00:28:54.586262 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:28:54.587399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:28:54.588984 systemd-networkd[1499]: lo: Link UP
Sep 11 00:28:54.589193 systemd-networkd[1499]: lo: Gained carrier
Sep 11 00:28:54.590847 systemd-networkd[1499]: Enumeration completed
Sep 11 00:28:54.590957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:28:54.592119 systemd[1]: Reached target network.target - Network.
Sep 11 00:28:54.594713 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 00:28:54.596804 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:28:54.597064 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:28:54.597636 systemd-networkd[1499]: eth0: Link UP
Sep 11 00:28:54.597969 systemd-networkd[1499]: eth0: Gained carrier
Sep 11 00:28:54.598031 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:28:54.598401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 00:28:54.626706 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:28:54.628268 systemd-timesyncd[1530]: Network configuration changed, trying to establish connection.
Sep 11 00:28:54.628949 systemd-timesyncd[1530]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 00:28:54.629031 systemd-timesyncd[1530]: Initial clock synchronization to Thu 2025-09-11 00:28:54.773752 UTC.
Sep 11 00:28:54.635916 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 00:28:54.650457 kernel: kvm_amd: TSC scaling supported
Sep 11 00:28:54.650526 kernel: kvm_amd: Nested Virtualization enabled
Sep 11 00:28:54.650563 kernel: kvm_amd: Nested Paging enabled
Sep 11 00:28:54.650576 kernel: kvm_amd: LBR virtualization supported
Sep 11 00:28:54.651630 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 11 00:28:54.651658 kernel: kvm_amd: Virtual GIF supported
Sep 11 00:28:54.696578 kernel: EDAC MC: Ver: 3.0.0
Sep 11 00:28:54.709010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:28:54.710425 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:28:54.711676 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 00:28:54.712895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 00:28:54.714185 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 11 00:28:54.715468 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 00:28:54.716660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 00:28:54.717907 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 00:28:54.719192 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 00:28:54.719229 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:28:54.720115 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:28:54.721935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 00:28:54.724452 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 00:28:54.727893 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 00:28:54.729310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 00:28:54.730585 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 00:28:54.739908 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 00:28:54.741413 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 00:28:54.743243 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 00:28:54.745125 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:28:54.746124 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:28:54.747098 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:28:54.747122 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:28:54.748142 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 00:28:54.750227 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 00:28:54.752087 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 00:28:54.754605 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 00:28:54.757642 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 00:28:54.758656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 00:28:54.759751 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 11 00:28:54.771260 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 00:28:54.774348 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Sep 11 00:28:54.774348 oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Sep 11 00:28:54.775118 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 00:28:54.776756 jq[1569]: false
Sep 11 00:28:54.777407 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 00:28:54.780659 extend-filesystems[1570]: Found /dev/vda6
Sep 11 00:28:54.781027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 00:28:54.782196 oslogin_cache_refresh[1571]: Failure getting users, quitting
Sep 11 00:28:54.783803 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting
Sep 11 00:28:54.783803 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:28:54.783803 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache
Sep 11 00:28:54.782214 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:28:54.782262 oslogin_cache_refresh[1571]: Refreshing group entry cache
Sep 11 00:28:54.783971 extend-filesystems[1570]: Found /dev/vda9
Sep 11 00:28:54.786106 extend-filesystems[1570]: Checking size of /dev/vda9
Sep 11 00:28:54.788530 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 00:28:54.790007 oslogin_cache_refresh[1571]: Failure getting groups, quitting
Sep 11 00:28:54.790675 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 00:28:54.791157 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting
Sep 11 00:28:54.791157 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:28:54.790018 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:28:54.791109 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 00:28:54.793333 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 00:28:54.795306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 00:28:54.804000 extend-filesystems[1570]: Resized partition /dev/vda9
Sep 11 00:28:54.806194 extend-filesystems[1597]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 00:28:54.808153 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 00:28:54.810010 jq[1590]: true
Sep 11 00:28:54.810616 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 00:28:54.810866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 00:28:54.811378 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 11 00:28:54.811662 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 11 00:28:54.815825 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 00:28:54.814495 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 00:28:54.814787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 00:28:54.817062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 00:28:54.817333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 00:28:54.827165 update_engine[1588]: I20250911 00:28:54.827102 1588 main.cc:92] Flatcar Update Engine starting
Sep 11 00:28:54.856557 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 00:28:54.859717 tar[1598]: linux-amd64/LICENSE
Sep 11 00:28:54.877443 tar[1598]: linux-amd64/helm
Sep 11 00:28:54.862669 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 00:28:54.877781 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 00:28:54.877781 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 00:28:54.877781 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 00:28:54.877274 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 11 00:28:54.886384 extend-filesystems[1570]: Resized filesystem in /dev/vda9
Sep 11 00:28:54.877294 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 11 00:28:54.887674 jq[1599]: true
Sep 11 00:28:54.879358 systemd-logind[1584]: New seat seat0.
Sep 11 00:28:54.879452 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 00:28:54.880392 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 00:28:54.884367 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 00:28:54.889173 dbus-daemon[1567]: [system] SELinux support is enabled
Sep 11 00:28:54.889677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 00:28:54.896419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
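(Editorial note, not part of the captured log: the EXT4-fs and extend-filesystems entries above report an online resize from 553472 to 1864699 blocks at the 4 KiB block size the log states. A small sketch of the arithmetic those numbers imply:)

```python
# Block counts taken from the EXT4-fs / extend-filesystems entries above;
# the "(4k)" in the log gives the filesystem block size.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_bytes = old_blocks * BLOCK_SIZE  # roughly 2.1 GiB before the resize
new_bytes = new_blocks * BLOCK_SIZE  # roughly 7.1 GiB after the resize
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```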
Sep 11 00:28:54.897303 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 11 00:28:54.896450 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 00:28:54.897816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 00:28:54.897837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 00:28:54.899709 update_engine[1588]: I20250911 00:28:54.899662 1588 update_check_scheduler.cc:74] Next update check in 5m19s
Sep 11 00:28:54.900085 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 00:28:54.903866 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 00:28:54.936791 bash[1632]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 00:28:54.941105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 00:28:54.943415 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 00:28:54.947969 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 00:28:54.997377 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 00:28:55.022985 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 00:28:55.027124 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 00:28:55.049466 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 00:28:55.049864 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 00:28:55.051869 containerd[1604]: time="2025-09-11T00:28:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 00:28:55.053139 containerd[1604]: time="2025-09-11T00:28:55.052423791Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 11 00:28:55.053526 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 00:28:55.066164 containerd[1604]: time="2025-09-11T00:28:55.066118998Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.763µs"
Sep 11 00:28:55.066164 containerd[1604]: time="2025-09-11T00:28:55.066146931Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 00:28:55.066164 containerd[1604]: time="2025-09-11T00:28:55.066162985Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 00:28:55.066339 containerd[1604]: time="2025-09-11T00:28:55.066315860Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 00:28:55.066339 containerd[1604]: time="2025-09-11T00:28:55.066336483Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 00:28:55.066381 containerd[1604]: time="2025-09-11T00:28:55.066359473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066468 containerd[1604]: time="2025-09-11T00:28:55.066442785Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066468 containerd[1604]: time="2025-09-11T00:28:55.066462559Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066728 containerd[1604]: time="2025-09-11T00:28:55.066698273Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066728 containerd[1604]: time="2025-09-11T00:28:55.066720544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066787 containerd[1604]: time="2025-09-11T00:28:55.066731775Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066787 containerd[1604]: time="2025-09-11T00:28:55.066740439Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 00:28:55.066869 containerd[1604]: time="2025-09-11T00:28:55.066840058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 00:28:55.067112 containerd[1604]: time="2025-09-11T00:28:55.067078137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:28:55.067149 containerd[1604]: time="2025-09-11T00:28:55.067121941Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:28:55.067149 containerd[1604]: time="2025-09-11T00:28:55.067132567Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 00:28:55.067191 containerd[1604]: time="2025-09-11T00:28:55.067176614Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 00:28:55.067497 containerd[1604]: time="2025-09-11T00:28:55.067455839Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 00:28:55.067575 containerd[1604]: time="2025-09-11T00:28:55.067552577Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 00:28:55.073619 containerd[1604]: time="2025-09-11T00:28:55.073588659Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 00:28:55.073664 containerd[1604]: time="2025-09-11T00:28:55.073643028Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 00:28:55.073664 containerd[1604]: time="2025-09-11T00:28:55.073655978Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 00:28:55.073719 containerd[1604]: time="2025-09-11T00:28:55.073666199Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 00:28:55.073719 containerd[1604]: time="2025-09-11T00:28:55.073679049Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 00:28:55.073719 containerd[1604]: time="2025-09-11T00:28:55.073689209Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 00:28:55.073719 containerd[1604]: time="2025-09-11T00:28:55.073701128Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 00:28:55.073719 containerd[1604]: time="2025-09-11T00:28:55.073712390Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 00:28:55.073804 containerd[1604]: time="2025-09-11T00:28:55.073722266Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 00:28:55.073804 containerd[1604]: time="2025-09-11T00:28:55.073732265Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 11 00:28:55.073804 containerd[1604]: time="2025-09-11T00:28:55.073741020Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 00:28:55.073804 containerd[1604]: time="2025-09-11T00:28:55.073753607Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073871534Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073895928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073909282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073928177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073939105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073948709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073962559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073971618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.073991463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.074001390Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.074010873Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.074075918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.074087109Z" level=info msg="Start snapshots syncer"
Sep 11 00:28:55.075201 containerd[1604]: time="2025-09-11T00:28:55.074104902Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 11 00:28:55.075497 containerd[1604]: time="2025-09-11T00:28:55.074315847Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 00:28:55.075497 containerd[1604]: time="2025-09-11T00:28:55.074553945Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 11 00:28:55.075617 containerd[1604]: time="2025-09-11T00:28:55.074681649Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.075981428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076045270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076064084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076075830Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076135810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076267689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076283723Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076316093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076328932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076340023Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076383706Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076401438Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:28:55.077351 containerd[1604]: time="2025-09-11T00:28:55.076409929Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:28:55.076113 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076422496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076432868Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076442280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076455200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076477046Z" level=info msg="runtime interface created" Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076484872Z" level=info msg="created NRI interface" Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076496305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076511166Z" level=info msg="Connect containerd service" Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.076567062Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:28:55.078213 containerd[1604]: time="2025-09-11T00:28:55.078089877Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:28:55.079756 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:28:55.084268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:28:55.085578 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:28:55.173011 containerd[1604]: time="2025-09-11T00:28:55.172954080Z" level=info msg="Start subscribing containerd event" Sep 11 00:28:55.173121 containerd[1604]: time="2025-09-11T00:28:55.173027000Z" level=info msg="Start recovering state" Sep 11 00:28:55.173173 containerd[1604]: time="2025-09-11T00:28:55.173152792Z" level=info msg="Start event monitor" Sep 11 00:28:55.173214 containerd[1604]: time="2025-09-11T00:28:55.173189662Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:28:55.173214 containerd[1604]: time="2025-09-11T00:28:55.173200388Z" level=info msg="Start streaming server" Sep 11 00:28:55.173214 containerd[1604]: time="2025-09-11T00:28:55.173209780Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:28:55.173268 containerd[1604]: time="2025-09-11T00:28:55.173222234Z" level=info msg="runtime interface starting up..." Sep 11 00:28:55.173268 containerd[1604]: time="2025-09-11T00:28:55.173230413Z" level=info msg="starting plugins..." Sep 11 00:28:55.173268 containerd[1604]: time="2025-09-11T00:28:55.173244476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:28:55.173570 containerd[1604]: time="2025-09-11T00:28:55.173506958Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 11 00:28:55.173720 containerd[1604]: time="2025-09-11T00:28:55.173691670Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:28:55.173871 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:28:55.175149 containerd[1604]: time="2025-09-11T00:28:55.175119405Z" level=info msg="containerd successfully booted in 0.123733s" Sep 11 00:28:55.313495 tar[1598]: linux-amd64/README.md Sep 11 00:28:55.340089 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:28:56.124060 systemd-networkd[1499]: eth0: Gained IPv6LL Sep 11 00:28:56.127102 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:28:56.128879 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:28:56.131407 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:28:56.133876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:56.141921 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:28:56.166387 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:28:56.168160 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:28:56.168450 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:28:56.171072 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:28:56.851946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:56.853609 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:28:56.855641 systemd[1]: Startup finished in 2.793s (kernel) + 5.848s (initrd) + 4.437s (userspace) = 13.079s. 
Sep 11 00:28:56.855682 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:28:57.276619 kubelet[1702]: E0911 00:28:57.276474 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:28:57.280887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:28:57.281097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:28:57.281486 systemd[1]: kubelet.service: Consumed 985ms CPU time, 266.8M memory peak. Sep 11 00:28:59.619001 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:28:59.620268 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:48820.service - OpenSSH per-connection server daemon (10.0.0.1:48820). Sep 11 00:28:59.684744 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 48820 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:28:59.686515 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:59.692732 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:28:59.693812 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:28:59.700479 systemd-logind[1584]: New session 1 of user core. Sep 11 00:28:59.712261 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:28:59.715807 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 11 00:28:59.734984 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:28:59.737432 systemd-logind[1584]: New session c1 of user core. Sep 11 00:28:59.891104 systemd[1720]: Queued start job for default target default.target. Sep 11 00:28:59.904770 systemd[1720]: Created slice app.slice - User Application Slice. Sep 11 00:28:59.904795 systemd[1720]: Reached target paths.target - Paths. Sep 11 00:28:59.904842 systemd[1720]: Reached target timers.target - Timers. Sep 11 00:28:59.906357 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:28:59.917264 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:28:59.917435 systemd[1720]: Reached target sockets.target - Sockets. Sep 11 00:28:59.917502 systemd[1720]: Reached target basic.target - Basic System. Sep 11 00:28:59.917585 systemd[1720]: Reached target default.target - Main User Target. Sep 11 00:28:59.917637 systemd[1720]: Startup finished in 173ms. Sep 11 00:28:59.917642 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:28:59.919230 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:28:59.986275 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:54858.service - OpenSSH per-connection server daemon (10.0.0.1:54858). Sep 11 00:29:00.045095 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 54858 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:00.046956 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.051786 systemd-logind[1584]: New session 2 of user core. Sep 11 00:29:00.065683 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 11 00:29:00.120549 sshd[1733]: Connection closed by 10.0.0.1 port 54858 Sep 11 00:29:00.120891 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:00.133844 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:54858.service: Deactivated successfully. Sep 11 00:29:00.135720 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:29:00.136454 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:29:00.139443 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:54860.service - OpenSSH per-connection server daemon (10.0.0.1:54860). Sep 11 00:29:00.140158 systemd-logind[1584]: Removed session 2. Sep 11 00:29:00.193264 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 54860 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:00.195098 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.199913 systemd-logind[1584]: New session 3 of user core. Sep 11 00:29:00.220720 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:29:00.271249 sshd[1741]: Connection closed by 10.0.0.1 port 54860 Sep 11 00:29:00.271645 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:00.291482 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:54860.service: Deactivated successfully. Sep 11 00:29:00.293156 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:29:00.294018 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:29:00.296636 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:54868.service - OpenSSH per-connection server daemon (10.0.0.1:54868). Sep 11 00:29:00.297310 systemd-logind[1584]: Removed session 3. 
Sep 11 00:29:00.342511 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 54868 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:00.343768 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.348494 systemd-logind[1584]: New session 4 of user core. Sep 11 00:29:00.357673 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:29:00.412297 sshd[1749]: Connection closed by 10.0.0.1 port 54868 Sep 11 00:29:00.412605 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:00.424234 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:54868.service: Deactivated successfully. Sep 11 00:29:00.426148 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:29:00.427036 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:29:00.429474 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:54884.service - OpenSSH per-connection server daemon (10.0.0.1:54884). Sep 11 00:29:00.430167 systemd-logind[1584]: Removed session 4. Sep 11 00:29:00.485984 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 54884 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:00.487464 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.491949 systemd-logind[1584]: New session 5 of user core. Sep 11 00:29:00.502698 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 00:29:00.560764 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:29:00.561055 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:29:00.585611 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 11 00:29:00.587236 sshd[1757]: Connection closed by 10.0.0.1 port 54884 Sep 11 00:29:00.587598 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:00.603310 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:54884.service: Deactivated successfully. Sep 11 00:29:00.605140 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:29:00.605935 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:29:00.608859 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:54890.service - OpenSSH per-connection server daemon (10.0.0.1:54890). Sep 11 00:29:00.609448 systemd-logind[1584]: Removed session 5. Sep 11 00:29:00.659947 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 54890 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:00.661244 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.665623 systemd-logind[1584]: New session 6 of user core. Sep 11 00:29:00.677684 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 00:29:00.729925 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:29:00.730208 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:29:00.896255 sudo[1768]: pam_unix(sudo:session): session closed for user root Sep 11 00:29:00.902235 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:29:00.902569 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:29:00.911660 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:29:00.954033 augenrules[1790]: No rules Sep 11 00:29:00.955831 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:29:00.956101 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:29:00.957118 sudo[1767]: pam_unix(sudo:session): session closed for user root Sep 11 00:29:00.958525 sshd[1766]: Connection closed by 10.0.0.1 port 54890 Sep 11 00:29:00.958841 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:00.971219 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:54890.service: Deactivated successfully. Sep 11 00:29:00.972971 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:29:00.973731 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:29:00.976737 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:54904.service - OpenSSH per-connection server daemon (10.0.0.1:54904). Sep 11 00:29:00.977321 systemd-logind[1584]: Removed session 6. Sep 11 00:29:01.020766 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 54904 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:29:01.022065 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:01.026789 systemd-logind[1584]: New session 7 of user core. 
Sep 11 00:29:01.040658 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:29:01.092604 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:29:01.092886 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:29:01.387462 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:29:01.407836 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:29:01.626270 dockerd[1823]: time="2025-09-11T00:29:01.626003708Z" level=info msg="Starting up" Sep 11 00:29:01.627481 dockerd[1823]: time="2025-09-11T00:29:01.627453256Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:29:01.964328 dockerd[1823]: time="2025-09-11T00:29:01.964288555Z" level=info msg="Loading containers: start." Sep 11 00:29:01.974569 kernel: Initializing XFRM netlink socket Sep 11 00:29:02.205129 systemd-networkd[1499]: docker0: Link UP Sep 11 00:29:02.210632 dockerd[1823]: time="2025-09-11T00:29:02.210584109Z" level=info msg="Loading containers: done." 
Sep 11 00:29:02.224097 dockerd[1823]: time="2025-09-11T00:29:02.223996587Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:29:02.224203 dockerd[1823]: time="2025-09-11T00:29:02.224097665Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:29:02.224234 dockerd[1823]: time="2025-09-11T00:29:02.224208245Z" level=info msg="Initializing buildkit" Sep 11 00:29:02.255070 dockerd[1823]: time="2025-09-11T00:29:02.255010269Z" level=info msg="Completed buildkit initialization" Sep 11 00:29:02.259118 dockerd[1823]: time="2025-09-11T00:29:02.259097280Z" level=info msg="Daemon has completed initialization" Sep 11 00:29:02.259168 dockerd[1823]: time="2025-09-11T00:29:02.259140357Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:29:02.259342 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:29:02.926082 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck291242343-merged.mount: Deactivated successfully. Sep 11 00:29:02.978196 containerd[1604]: time="2025-09-11T00:29:02.978149125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 00:29:04.198812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677892010.mount: Deactivated successfully. 
Sep 11 00:29:05.110139 containerd[1604]: time="2025-09-11T00:29:05.110081246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:05.110955 containerd[1604]: time="2025-09-11T00:29:05.110914762Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 11 00:29:05.112128 containerd[1604]: time="2025-09-11T00:29:05.112087745Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:05.114685 containerd[1604]: time="2025-09-11T00:29:05.114641341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:05.115557 containerd[1604]: time="2025-09-11T00:29:05.115521051Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.13732537s" Sep 11 00:29:05.115597 containerd[1604]: time="2025-09-11T00:29:05.115569108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 11 00:29:05.116202 containerd[1604]: time="2025-09-11T00:29:05.116151377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 11 00:29:06.226762 containerd[1604]: time="2025-09-11T00:29:06.226703192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:06.227633 containerd[1604]: time="2025-09-11T00:29:06.227577772Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 11 00:29:06.228729 containerd[1604]: time="2025-09-11T00:29:06.228679935Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:06.231169 containerd[1604]: time="2025-09-11T00:29:06.231133340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:06.232123 containerd[1604]: time="2025-09-11T00:29:06.232079169Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.115898592s" Sep 11 00:29:06.232123 containerd[1604]: time="2025-09-11T00:29:06.232114139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 11 00:29:06.232698 containerd[1604]: time="2025-09-11T00:29:06.232640545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 11 00:29:07.531508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:29:07.532988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:07.897227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:29:07.920849 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:29:08.052691 containerd[1604]: time="2025-09-11T00:29:08.052634354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:08.053679 containerd[1604]: time="2025-09-11T00:29:08.053655864Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 11 00:29:08.054893 containerd[1604]: time="2025-09-11T00:29:08.054862351Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:08.057443 containerd[1604]: time="2025-09-11T00:29:08.057396834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:08.058185 containerd[1604]: time="2025-09-11T00:29:08.058141458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.825449394s" Sep 11 00:29:08.058185 containerd[1604]: time="2025-09-11T00:29:08.058174147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 11 00:29:08.059660 containerd[1604]: time="2025-09-11T00:29:08.059452679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 11 00:29:08.074494 
kubelet[2108]: E0911 00:29:08.074444 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:29:08.081288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:29:08.081488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:29:08.081908 systemd[1]: kubelet.service: Consumed 222ms CPU time, 111.8M memory peak. Sep 11 00:29:09.135869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502523336.mount: Deactivated successfully. Sep 11 00:29:09.416738 containerd[1604]: time="2025-09-11T00:29:09.416617774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:09.417337 containerd[1604]: time="2025-09-11T00:29:09.417261248Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 11 00:29:09.418984 containerd[1604]: time="2025-09-11T00:29:09.418926486Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:09.421071 containerd[1604]: time="2025-09-11T00:29:09.421039760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:09.421803 containerd[1604]: time="2025-09-11T00:29:09.421766664Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.362286208s" Sep 11 00:29:09.421803 containerd[1604]: time="2025-09-11T00:29:09.421798309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 11 00:29:09.422271 containerd[1604]: time="2025-09-11T00:29:09.422239346Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 11 00:29:09.954998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935900762.mount: Deactivated successfully. Sep 11 00:29:10.970622 containerd[1604]: time="2025-09-11T00:29:10.970565600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.971229 containerd[1604]: time="2025-09-11T00:29:10.971183000Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 11 00:29:10.972214 containerd[1604]: time="2025-09-11T00:29:10.972171647Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.974578 containerd[1604]: time="2025-09-11T00:29:10.974508966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.975520 containerd[1604]: time="2025-09-11T00:29:10.975493682Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.553225137s" Sep 11 00:29:10.975581 containerd[1604]: time="2025-09-11T00:29:10.975525802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 11 00:29:10.976053 containerd[1604]: time="2025-09-11T00:29:10.976031269Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:29:11.499746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28087595.mount: Deactivated successfully. Sep 11 00:29:11.505768 containerd[1604]: time="2025-09-11T00:29:11.505711424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:29:11.506445 containerd[1604]: time="2025-09-11T00:29:11.506398055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:29:11.507578 containerd[1604]: time="2025-09-11T00:29:11.507521847Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:29:11.509552 containerd[1604]: time="2025-09-11T00:29:11.509486786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:29:11.510067 containerd[1604]: time="2025-09-11T00:29:11.510029103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.972653ms" Sep 11 00:29:11.510067 containerd[1604]: time="2025-09-11T00:29:11.510058482Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:29:11.510573 containerd[1604]: time="2025-09-11T00:29:11.510531526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 11 00:29:12.016374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287520722.mount: Deactivated successfully. Sep 11 00:29:14.474172 containerd[1604]: time="2025-09-11T00:29:14.474104733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.474796 containerd[1604]: time="2025-09-11T00:29:14.474752572Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 11 00:29:14.475807 containerd[1604]: time="2025-09-11T00:29:14.475778710Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.478407 containerd[1604]: time="2025-09-11T00:29:14.478365021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:14.479408 containerd[1604]: time="2025-09-11T00:29:14.479380019Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.968806212s" Sep 11 00:29:14.479445 containerd[1604]: time="2025-09-11T00:29:14.479411039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 11 00:29:18.093716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:29:18.095365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:18.110271 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:29:18.110364 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:29:18.110659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:29:18.112643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:18.136157 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)... Sep 11 00:29:18.136173 systemd[1]: Reloading... Sep 11 00:29:18.222578 zram_generator::config[2312]: No configuration found. Sep 11 00:29:18.691351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:29:18.817574 systemd[1]: Reloading finished in 681 ms. Sep 11 00:29:18.874240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:29:18.874338 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:29:18.874635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:29:18.874674 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.3M memory peak. Sep 11 00:29:18.876265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:19.081192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
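The containerd entries above pair a "bytes read" figure with a pull duration ("size … in …s"), so effective pull throughput can be derived directly from the log. A small sketch using the four pulls recorded here (byte counts and durations copied from the entries; the helper name is my own):

```python
# Effective image-pull throughput from containerd's "bytes read" and
# pull-duration fields (values copied from the log entries above).
pulls = {
    "kube-proxy:v1.33.5": (31_929_469, 1.362286208),  # bytes, seconds
    "coredns:v1.12.0":    (20_942_238, 1.553225137),
    "pause:3.10":         (321_138,    0.533972653),
    "etcd:3.5.21-0":      (58_378_433, 2.968806212),
}

def throughput_mib_s(nbytes: int, seconds: float) -> float:
    """Bytes read divided by wall-clock pull time, in MiB/s."""
    return nbytes / seconds / (1024 * 1024)

for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {throughput_mib_s(nbytes, secs):.1f} MiB/s")
```

The tiny pause image is dominated by round-trip latency rather than bandwidth, which is why its rate is far below the larger pulls.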
Sep 11 00:29:19.085959 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:29:19.128011 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:29:19.128011 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:29:19.128011 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:29:19.128451 kubelet[2357]: I0911 00:29:19.128042 2357 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:29:20.260813 kubelet[2357]: I0911 00:29:20.260769 2357 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:29:20.260813 kubelet[2357]: I0911 00:29:20.260797 2357 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:29:20.261251 kubelet[2357]: I0911 00:29:20.260997 2357 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:29:20.287603 kubelet[2357]: I0911 00:29:20.287573 2357 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:29:20.288264 kubelet[2357]: E0911 00:29:20.288172 2357 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: 
connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 11 00:29:20.295398 kubelet[2357]: I0911 00:29:20.295376 2357 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:29:20.300890 kubelet[2357]: I0911 00:29:20.300847 2357 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:29:20.301121 kubelet[2357]: I0911 00:29:20.301071 2357 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:29:20.301289 kubelet[2357]: I0911 00:29:20.301100 2357 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerRecon
cilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:29:20.301289 kubelet[2357]: I0911 00:29:20.301280 2357 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:29:20.301289 kubelet[2357]: I0911 00:29:20.301288 2357 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:29:20.302042 kubelet[2357]: I0911 00:29:20.302005 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:29:20.537851 kubelet[2357]: I0911 00:29:20.537732 2357 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:29:20.537851 kubelet[2357]: I0911 00:29:20.537766 2357 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:29:20.537851 kubelet[2357]: I0911 00:29:20.537789 2357 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:29:20.537851 kubelet[2357]: I0911 00:29:20.537805 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:29:20.544146 kubelet[2357]: E0911 00:29:20.544100 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 00:29:20.544255 kubelet[2357]: E0911 00:29:20.544202 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 00:29:20.545900 kubelet[2357]: I0911 00:29:20.545868 
2357 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:29:20.546612 kubelet[2357]: I0911 00:29:20.546572 2357 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:29:20.547141 kubelet[2357]: W0911 00:29:20.547122 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:29:20.549860 kubelet[2357]: I0911 00:29:20.549845 2357 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:29:20.549905 kubelet[2357]: I0911 00:29:20.549892 2357 server.go:1289] "Started kubelet" Sep 11 00:29:20.550007 kubelet[2357]: I0911 00:29:20.549956 2357 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:29:20.550764 kubelet[2357]: I0911 00:29:20.550532 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:29:20.550967 kubelet[2357]: I0911 00:29:20.550953 2357 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:29:20.551288 kubelet[2357]: I0911 00:29:20.551269 2357 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:29:20.552146 kubelet[2357]: I0911 00:29:20.551826 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:29:20.552890 kubelet[2357]: I0911 00:29:20.552514 2357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:29:20.554319 kubelet[2357]: E0911 00:29:20.554300 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:29:20.554378 kubelet[2357]: I0911 00:29:20.554332 2357 volume_manager.go:297] "Starting Kubelet Volume Manager" 
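The kubelet lines above use klog's text format: a severity letter (I/W/E/F), MMDD date, wall-clock time, PID, `file:line` source, then the message — e.g. `I0911 00:29:20.549892 2357 server.go:1289] "Started kubelet"`. A minimal parser sketch for pulling those header fields apart (the regex and field names are mine, not a kubelet API):

```python
import re

# klog header: <severity>MMDD HH:MM:SS.micros  <pid> <file>:<line>] <message>
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<source>\S+:\d+)\]\s*(?P<message>.*)$"
)

def parse_klog(line):
    """Split one klog-formatted line into its header fields, or None."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog('I0911 00:29:20.549892 2357 server.go:1289] "Started kubelet"')
```

Filtering a captured journal for `severity == "E"` with a parser like this quickly surfaces the connection-refused errors scattered through this boot.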
Sep 11 00:29:20.554634 kubelet[2357]: I0911 00:29:20.554555 2357 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:29:20.554634 kubelet[2357]: I0911 00:29:20.554606 2357 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:29:20.554953 kubelet[2357]: E0911 00:29:20.554914 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 11 00:29:20.555030 kubelet[2357]: E0911 00:29:20.554995 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Sep 11 00:29:20.555631 kubelet[2357]: E0911 00:29:20.553958 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412ec445cfa3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:29:20.549861947 +0000 UTC m=+1.459725567,LastTimestamp:2025-09-11 00:29:20.549861947 +0000 UTC m=+1.459725567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:29:20.556195 kubelet[2357]: I0911 00:29:20.556174 2357 factory.go:223] Registration of the systemd container factory successfully Sep 11 
00:29:20.556280 kubelet[2357]: I0911 00:29:20.556261 2357 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:29:20.556674 kubelet[2357]: E0911 00:29:20.556649 2357 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:29:20.557571 kubelet[2357]: I0911 00:29:20.557146 2357 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:29:20.568696 kubelet[2357]: I0911 00:29:20.568672 2357 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:29:20.568696 kubelet[2357]: I0911 00:29:20.568692 2357 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:29:20.568815 kubelet[2357]: I0911 00:29:20.568710 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:29:20.571447 kubelet[2357]: I0911 00:29:20.571408 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:29:20.572761 kubelet[2357]: I0911 00:29:20.572635 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 00:29:20.572761 kubelet[2357]: I0911 00:29:20.572655 2357 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:29:20.572761 kubelet[2357]: I0911 00:29:20.572678 2357 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
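The odd-looking URL in the failed crio factory registration, `http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info`, is a unix socket path percent-encoded into the host slot of an HTTP URL, a common pattern for HTTP-over-unix-socket clients. A quick sketch of the round trip (standard-library only):

```python
from urllib.parse import quote, unquote

# Percent-encode the socket path so it can occupy the URL's host slot,
# reproducing the form seen in the log line above.
encoded = quote("/var/run/crio/crio.sock", safe="")
url = f"http://{encoded}/info"
decoded = unquote(encoded)
```

The factory registration fails here simply because no crio socket exists on this containerd-based host, which is expected and harmless.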
Sep 11 00:29:20.572761 kubelet[2357]: I0911 00:29:20.572687 2357 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:29:20.573012 kubelet[2357]: E0911 00:29:20.572898 2357 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:29:20.573441 kubelet[2357]: E0911 00:29:20.573390 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 11 00:29:20.573555 kubelet[2357]: I0911 00:29:20.573464 2357 policy_none.go:49] "None policy: Start" Sep 11 00:29:20.573555 kubelet[2357]: I0911 00:29:20.573481 2357 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:29:20.573555 kubelet[2357]: I0911 00:29:20.573493 2357 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:29:20.581752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:29:20.596846 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:29:20.612709 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
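The three slices created above (kubepods.slice, kubepods-besteffort.slice, kubepods-burstable.slice) mirror the kubelet's pod QoS classes under the systemd cgroup driver; Guaranteed pods sit directly under kubepods.slice. A simplified sketch of the classification rule and the slice each class maps to (the real logic lives in the kubelet's qos package; this is an approximation):

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Simplified pod QoS classification:
    - no CPU/memory requests or limits at all -> BestEffort
    - requests equal limits for both resources -> Guaranteed
    - anything else -> Burstable
    """
    if not requests and not limits:
        return "BestEffort"
    if requests == limits and {"cpu", "memory"} <= set(limits):
        return "Guaranteed"
    return "Burstable"

def parent_slice(qos: str) -> str:
    # Guaranteed pods live directly under kubepods.slice; the other two
    # classes get the child slices created in the log entries above.
    return {
        "Guaranteed": "kubepods.slice",
        "Burstable": "kubepods-burstable.slice",
        "BestEffort": "kubepods-besteffort.slice",
    }[qos]
```

The per-pod slices that appear shortly afterwards (e.g. kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice) nest under these class slices.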
Sep 11 00:29:20.614064 kubelet[2357]: E0911 00:29:20.614020 2357 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:29:20.614360 kubelet[2357]: I0911 00:29:20.614339 2357 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:29:20.614408 kubelet[2357]: I0911 00:29:20.614359 2357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:29:20.614834 kubelet[2357]: I0911 00:29:20.614668 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:29:20.615344 kubelet[2357]: E0911 00:29:20.615312 2357 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:29:20.615384 kubelet[2357]: E0911 00:29:20.615376 2357 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:29:20.684139 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 11 00:29:20.692336 kubelet[2357]: E0911 00:29:20.692302 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:20.694121 systemd[1]: Created slice kubepods-burstable-pod555a6d04cdee34597a785db98090797f.slice - libcontainer container kubepods-burstable-pod555a6d04cdee34597a785db98090797f.slice. 
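The eviction manager starting its control loop here enforces the hard thresholds dumped in the NodeConfig earlier in this log: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A toy evaluation of observed signals against those thresholds (threshold values from the log; the helper and its data shapes are mine):

```python
# Hard eviction thresholds as dumped in the NodeConfig entry above.
HARD_THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def signals_under_pressure(observed: dict, capacity: dict) -> list:
    """Return signals whose observed free amount is below its threshold.
    `observed` holds free bytes/inodes per signal; `capacity` the totals
    (only needed for percentage-style thresholds)."""
    breached = []
    for signal, (kind, threshold) in HARD_THRESHOLDS.items():
        if signal not in observed:
            continue
        limit = threshold if kind == "quantity" else threshold * capacity[signal]
        if observed[signal] < limit:
            breached.append(signal)
    return breached
```

At this point in the boot the manager cannot compute any of this yet — node stats are unavailable ("failed to get node info: node \"localhost\" not found"), hence the errors immediately above.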
Sep 11 00:29:20.714860 kubelet[2357]: E0911 00:29:20.714835 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:20.715706 kubelet[2357]: I0911 00:29:20.715685 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:29:20.716168 kubelet[2357]: E0911 00:29:20.716130 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 11 00:29:20.717700 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 11 00:29:20.719626 kubelet[2357]: E0911 00:29:20.719592 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:20.755899 kubelet[2357]: I0911 00:29:20.755844 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:20.755899 kubelet[2357]: I0911 00:29:20.755896 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:20.755899 kubelet[2357]: I0911 00:29:20.755914 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:20.756413 kubelet[2357]: I0911 00:29:20.755930 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:20.756413 kubelet[2357]: I0911 00:29:20.755945 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:20.756413 kubelet[2357]: I0911 00:29:20.755958 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:20.756413 kubelet[2357]: I0911 00:29:20.755974 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:20.756413 kubelet[2357]: I0911 00:29:20.756015 2357 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:20.756614 kubelet[2357]: I0911 00:29:20.756044 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:20.756614 kubelet[2357]: E0911 00:29:20.756220 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Sep 11 00:29:20.918301 kubelet[2357]: I0911 00:29:20.918196 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:29:20.918627 kubelet[2357]: E0911 00:29:20.918589 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 11 00:29:20.993864 containerd[1604]: time="2025-09-11T00:29:20.993817582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:21.016238 containerd[1604]: time="2025-09-11T00:29:21.016209012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:555a6d04cdee34597a785db98090797f,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:21.020882 containerd[1604]: time="2025-09-11T00:29:21.020838536Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:21.157524 kubelet[2357]: E0911 00:29:21.157485 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Sep 11 00:29:21.320434 kubelet[2357]: I0911 00:29:21.320310 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:29:21.320892 kubelet[2357]: E0911 00:29:21.320660 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 11 00:29:21.372756 containerd[1604]: time="2025-09-11T00:29:21.372697666Z" level=info msg="connecting to shim 5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1" address="unix:///run/containerd/s/1851b1c15c5c08797320db76ee485b4721e59e5e78d44f4c0ca7c7bc63ad2972" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:21.372892 containerd[1604]: time="2025-09-11T00:29:21.372697766Z" level=info msg="connecting to shim 34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276" address="unix:///run/containerd/s/6b50e53d202e9f24ac9ca3883555a97b025559a9d52b6c9f7b701bea35c3f867" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:21.379170 containerd[1604]: time="2025-09-11T00:29:21.378771211Z" level=info msg="connecting to shim 6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d" address="unix:///run/containerd/s/c622a24975807c5a582b5b8f59e6f462e0374bf2f17d2c724a8a9ebc4f7271b3" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:21.401795 systemd[1]: Started cri-containerd-34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276.scope - libcontainer container 
34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276. Sep 11 00:29:21.405813 systemd[1]: Started cri-containerd-5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1.scope - libcontainer container 5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1. Sep 11 00:29:21.411805 systemd[1]: Started cri-containerd-6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d.scope - libcontainer container 6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d. Sep 11 00:29:21.456783 containerd[1604]: time="2025-09-11T00:29:21.456737005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276\"" Sep 11 00:29:21.459569 containerd[1604]: time="2025-09-11T00:29:21.459522935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d\"" Sep 11 00:29:21.464208 containerd[1604]: time="2025-09-11T00:29:21.463877288Z" level=info msg="CreateContainer within sandbox \"34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:29:21.464208 containerd[1604]: time="2025-09-11T00:29:21.464131493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:555a6d04cdee34597a785db98090797f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1\"" Sep 11 00:29:21.466152 containerd[1604]: time="2025-09-11T00:29:21.466126577Z" level=info msg="CreateContainer within sandbox \"6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:29:21.468167 containerd[1604]: time="2025-09-11T00:29:21.468136273Z" level=info msg="CreateContainer within sandbox \"5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:29:21.473591 containerd[1604]: time="2025-09-11T00:29:21.473559513Z" level=info msg="Container 305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:21.482579 containerd[1604]: time="2025-09-11T00:29:21.482528216Z" level=info msg="CreateContainer within sandbox \"34fa3cc0d60b4db9e51b0b1c36c323ea023b788a59791f6323203256f3e5d276\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06\"" Sep 11 00:29:21.483015 containerd[1604]: time="2025-09-11T00:29:21.482983688Z" level=info msg="StartContainer for \"305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06\"" Sep 11 00:29:21.484023 containerd[1604]: time="2025-09-11T00:29:21.483997117Z" level=info msg="connecting to shim 305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06" address="unix:///run/containerd/s/6b50e53d202e9f24ac9ca3883555a97b025559a9d52b6c9f7b701bea35c3f867" protocol=ttrpc version=3 Sep 11 00:29:21.484725 containerd[1604]: time="2025-09-11T00:29:21.484700983Z" level=info msg="Container 6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:21.490274 containerd[1604]: time="2025-09-11T00:29:21.490242213Z" level=info msg="Container cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:21.495698 containerd[1604]: time="2025-09-11T00:29:21.495654995Z" level=info msg="CreateContainer within sandbox \"6177de9130e8d4339a39a1a3bccb7998197d6890792ee2f2c468711c44d45d1d\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba\"" Sep 11 00:29:21.496096 containerd[1604]: time="2025-09-11T00:29:21.496083762Z" level=info msg="StartContainer for \"6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba\"" Sep 11 00:29:21.497028 containerd[1604]: time="2025-09-11T00:29:21.497005042Z" level=info msg="connecting to shim 6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba" address="unix:///run/containerd/s/c622a24975807c5a582b5b8f59e6f462e0374bf2f17d2c724a8a9ebc4f7271b3" protocol=ttrpc version=3 Sep 11 00:29:21.501119 containerd[1604]: time="2025-09-11T00:29:21.501088261Z" level=info msg="CreateContainer within sandbox \"5485cbd77024ea1e294cfeab76761cf9a65707c7320b29afff752086da75f5d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291\"" Sep 11 00:29:21.501560 containerd[1604]: time="2025-09-11T00:29:21.501513113Z" level=info msg="StartContainer for \"cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291\"" Sep 11 00:29:21.502651 containerd[1604]: time="2025-09-11T00:29:21.502626839Z" level=info msg="connecting to shim cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291" address="unix:///run/containerd/s/1851b1c15c5c08797320db76ee485b4721e59e5e78d44f4c0ca7c7bc63ad2972" protocol=ttrpc version=3 Sep 11 00:29:21.505672 systemd[1]: Started cri-containerd-305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06.scope - libcontainer container 305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06. Sep 11 00:29:21.518680 systemd[1]: Started cri-containerd-6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba.scope - libcontainer container 6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba. 
Sep 11 00:29:21.527783 systemd[1]: Started cri-containerd-cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291.scope - libcontainer container cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291. Sep 11 00:29:21.583901 containerd[1604]: time="2025-09-11T00:29:21.583776368Z" level=info msg="StartContainer for \"cf34266c4937eb0f351a923e6ecebbd4979eb73497082b8f573704261e568291\" returns successfully" Sep 11 00:29:21.586552 containerd[1604]: time="2025-09-11T00:29:21.586512029Z" level=info msg="StartContainer for \"305156ffd8a966f64f94245f9a190bc60fee4ca9d19327d53d054ca9baf56d06\" returns successfully" Sep 11 00:29:21.587620 containerd[1604]: time="2025-09-11T00:29:21.587599380Z" level=info msg="StartContainer for \"6e438da22d296b0c6e47e68c73117d732ba8ec82144fc9cf83ecf300865d42ba\" returns successfully" Sep 11 00:29:21.593383 kubelet[2357]: E0911 00:29:21.593357 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:21.595653 kubelet[2357]: E0911 00:29:21.595635 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:21.601273 kubelet[2357]: E0911 00:29:21.601235 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 00:29:22.122579 kubelet[2357]: I0911 00:29:22.122545 2357 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:29:22.601707 kubelet[2357]: E0911 00:29:22.601513 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Sep 11 00:29:22.603807 kubelet[2357]: E0911 00:29:22.603751 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:22.605875 kubelet[2357]: E0911 00:29:22.605023 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:29:22.616074 kubelet[2357]: E0911 00:29:22.616042 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:29:22.695445 kubelet[2357]: I0911 00:29:22.695394 2357 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:29:22.695445 kubelet[2357]: E0911 00:29:22.695428 2357 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:29:22.754799 kubelet[2357]: I0911 00:29:22.754746 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:22.759529 kubelet[2357]: E0911 00:29:22.759499 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:22.759529 kubelet[2357]: I0911 00:29:22.759520 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:22.760761 kubelet[2357]: E0911 00:29:22.760719 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:22.760761 kubelet[2357]: I0911 00:29:22.760760 2357 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:22.761831 kubelet[2357]: E0911 00:29:22.761799 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:23.539497 kubelet[2357]: I0911 00:29:23.539444 2357 apiserver.go:52] "Watching apiserver" Sep 11 00:29:23.555349 kubelet[2357]: I0911 00:29:23.554913 2357 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:29:23.600629 kubelet[2357]: I0911 00:29:23.600587 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:24.699733 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-7.scope)... Sep 11 00:29:24.699749 systemd[1]: Reloading... Sep 11 00:29:24.781620 zram_generator::config[2687]: No configuration found. Sep 11 00:29:24.874066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:29:25.004230 systemd[1]: Reloading finished in 304 ms. Sep 11 00:29:25.035413 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:25.050009 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:29:25.050348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:29:25.050409 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 132.8M memory peak. Sep 11 00:29:25.052450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:29:25.273682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:29:25.277991 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:29:25.319217 kubelet[2731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:29:25.319217 kubelet[2731]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:29:25.319217 kubelet[2731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:29:25.319684 kubelet[2731]: I0911 00:29:25.319239 2731 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:29:25.326196 kubelet[2731]: I0911 00:29:25.326155 2731 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:29:25.326196 kubelet[2731]: I0911 00:29:25.326176 2731 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:29:25.326352 kubelet[2731]: I0911 00:29:25.326337 2731 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:29:25.327439 kubelet[2731]: I0911 00:29:25.327416 2731 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 11 00:29:25.329497 kubelet[2731]: I0911 00:29:25.329424 2731 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:29:25.332907 kubelet[2731]: I0911 00:29:25.332872 2731 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Sep 11 00:29:25.339316 kubelet[2731]: I0911 00:29:25.339272 2731 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:29:25.339633 kubelet[2731]: I0911 00:29:25.339598 2731 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:29:25.339794 kubelet[2731]: I0911 00:29:25.339630 2731 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Sep 11 00:29:25.339884 kubelet[2731]: I0911 00:29:25.339796 2731 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:29:25.339884 kubelet[2731]: I0911 00:29:25.339806 2731 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:29:25.339884 kubelet[2731]: I0911 00:29:25.339854 2731 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:29:25.340044 kubelet[2731]: I0911 00:29:25.340029 2731 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:29:25.340044 kubelet[2731]: I0911 00:29:25.340044 2731 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:29:25.340106 kubelet[2731]: I0911 00:29:25.340070 2731 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:29:25.340106 kubelet[2731]: I0911 00:29:25.340086 2731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:29:25.341129 kubelet[2731]: I0911 00:29:25.341068 2731 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:29:25.341835 kubelet[2731]: I0911 00:29:25.341809 2731 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:29:25.346784 kubelet[2731]: I0911 00:29:25.346757 2731 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:29:25.346856 kubelet[2731]: I0911 00:29:25.346814 2731 server.go:1289] "Started kubelet" Sep 11 00:29:25.348594 kubelet[2731]: I0911 00:29:25.348570 2731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:29:25.349990 kubelet[2731]: I0911 00:29:25.349961 2731 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:29:25.351112 kubelet[2731]: I0911 00:29:25.350914 2731 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:29:25.351112 kubelet[2731]: I0911 00:29:25.351047 2731 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:29:25.351293 kubelet[2731]: I0911 00:29:25.351259 2731 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:29:25.351790 kubelet[2731]: I0911 00:29:25.351768 2731 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:29:25.352410 kubelet[2731]: I0911 00:29:25.351884 2731 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:29:25.352410 kubelet[2731]: I0911 00:29:25.352255 2731 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:29:25.352475 kubelet[2731]: I0911 00:29:25.352448 2731 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:29:25.357153 kubelet[2731]: E0911 00:29:25.357124 2731 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:29:25.359295 kubelet[2731]: I0911 00:29:25.358628 2731 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:29:25.359295 kubelet[2731]: I0911 00:29:25.358647 2731 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:29:25.359295 kubelet[2731]: I0911 00:29:25.358718 2731 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:29:25.366626 kubelet[2731]: I0911 00:29:25.366569 2731 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:29:25.368762 kubelet[2731]: I0911 00:29:25.368721 2731 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:29:25.368762 kubelet[2731]: I0911 00:29:25.368749 2731 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:29:25.368762 kubelet[2731]: I0911 00:29:25.368765 2731 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:29:25.368762 kubelet[2731]: I0911 00:29:25.368773 2731 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:29:25.369027 kubelet[2731]: E0911 00:29:25.368817 2731 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391625 2731 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391642 2731 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391658 2731 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391770 2731 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391778 2731 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391794 2731 policy_none.go:49] "None policy: Start" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391802 2731 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:29:25.391816 kubelet[2731]: I0911 00:29:25.391811 2731 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:29:25.392115 kubelet[2731]: I0911 00:29:25.391888 2731 state_mem.go:75] "Updated machine memory state" Sep 11 00:29:25.397840 kubelet[2731]: E0911 00:29:25.397801 2731 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:29:25.398036 kubelet[2731]: I0911 
00:29:25.398023 2731 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:29:25.398070 kubelet[2731]: I0911 00:29:25.398041 2731 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:29:25.398268 kubelet[2731]: I0911 00:29:25.398226 2731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:29:25.398928 kubelet[2731]: E0911 00:29:25.398897 2731 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:29:25.469861 kubelet[2731]: I0911 00:29:25.469810 2731 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:25.469999 kubelet[2731]: I0911 00:29:25.469933 2731 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.470557 kubelet[2731]: I0911 00:29:25.470132 2731 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:25.492182 kubelet[2731]: E0911 00:29:25.491974 2731 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.504152 kubelet[2731]: I0911 00:29:25.504123 2731 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:29:25.532974 kubelet[2731]: I0911 00:29:25.532796 2731 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 00:29:25.532974 kubelet[2731]: I0911 00:29:25.532892 2731 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:29:25.552558 kubelet[2731]: I0911 00:29:25.552491 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.552558 kubelet[2731]: I0911 00:29:25.552524 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.552558 kubelet[2731]: I0911 00:29:25.552573 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:25.552798 kubelet[2731]: I0911 00:29:25.552681 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.552798 kubelet[2731]: I0911 00:29:25.552753 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:25.552798 kubelet[2731]: I0911 00:29:25.552781 2731 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:25.552798 kubelet[2731]: I0911 00:29:25.552800 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:25.552934 kubelet[2731]: I0911 00:29:25.552817 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/555a6d04cdee34597a785db98090797f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"555a6d04cdee34597a785db98090797f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:25.552934 kubelet[2731]: I0911 00:29:25.552833 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:26.340669 kubelet[2731]: I0911 00:29:26.340608 2731 apiserver.go:52] "Watching apiserver" Sep 11 00:29:26.352093 kubelet[2731]: I0911 00:29:26.352027 2731 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:29:26.383947 kubelet[2731]: I0911 00:29:26.383901 2731 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:26.384311 kubelet[2731]: I0911 00:29:26.384288 2731 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:26.384532 kubelet[2731]: I0911 00:29:26.384509 2731 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:26.393561 kubelet[2731]: E0911 00:29:26.392169 2731 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:29:26.393698 kubelet[2731]: E0911 00:29:26.393641 2731 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:29:26.394268 kubelet[2731]: E0911 00:29:26.394232 2731 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:29:26.414778 kubelet[2731]: I0911 00:29:26.414701 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.414667258 podStartE2EDuration="1.414667258s" podCreationTimestamp="2025-09-11 00:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:26.41406067 +0000 UTC m=+1.131668560" watchObservedRunningTime="2025-09-11 00:29:26.414667258 +0000 UTC m=+1.132275148" Sep 11 00:29:26.430061 kubelet[2731]: I0911 00:29:26.430011 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.430001892 podStartE2EDuration="3.430001892s" podCreationTimestamp="2025-09-11 00:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:26.42337447 +0000 UTC m=+1.140982380" watchObservedRunningTime="2025-09-11 00:29:26.430001892 +0000 UTC m=+1.147609782" Sep 11 
00:29:29.942105 kubelet[2731]: I0911 00:29:29.942071 2731 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:29:29.942722 containerd[1604]: time="2025-09-11T00:29:29.942397446Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:29:29.943013 kubelet[2731]: I0911 00:29:29.942711 2731 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:29:30.922024 kubelet[2731]: I0911 00:29:30.921904 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.9218868780000005 podStartE2EDuration="5.921886878s" podCreationTimestamp="2025-09-11 00:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:26.43014379 +0000 UTC m=+1.147751670" watchObservedRunningTime="2025-09-11 00:29:30.921886878 +0000 UTC m=+5.639494768" Sep 11 00:29:31.053751 systemd[1]: Created slice kubepods-besteffort-podf9a14993_e262_44c1_8186_bf3f61c90d45.slice - libcontainer container kubepods-besteffort-podf9a14993_e262_44c1_8186_bf3f61c90d45.slice. 
Sep 11 00:29:31.085489 kubelet[2731]: I0911 00:29:31.085439 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9a14993-e262-44c1-8186-bf3f61c90d45-kube-proxy\") pod \"kube-proxy-cg9vs\" (UID: \"f9a14993-e262-44c1-8186-bf3f61c90d45\") " pod="kube-system/kube-proxy-cg9vs" Sep 11 00:29:31.085489 kubelet[2731]: I0911 00:29:31.085485 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9a14993-e262-44c1-8186-bf3f61c90d45-lib-modules\") pod \"kube-proxy-cg9vs\" (UID: \"f9a14993-e262-44c1-8186-bf3f61c90d45\") " pod="kube-system/kube-proxy-cg9vs" Sep 11 00:29:31.085915 kubelet[2731]: I0911 00:29:31.085508 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9a14993-e262-44c1-8186-bf3f61c90d45-xtables-lock\") pod \"kube-proxy-cg9vs\" (UID: \"f9a14993-e262-44c1-8186-bf3f61c90d45\") " pod="kube-system/kube-proxy-cg9vs" Sep 11 00:29:31.085915 kubelet[2731]: I0911 00:29:31.085586 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqb6\" (UniqueName: \"kubernetes.io/projected/f9a14993-e262-44c1-8186-bf3f61c90d45-kube-api-access-zcqb6\") pod \"kube-proxy-cg9vs\" (UID: \"f9a14993-e262-44c1-8186-bf3f61c90d45\") " pod="kube-system/kube-proxy-cg9vs" Sep 11 00:29:31.150868 systemd[1]: Created slice kubepods-besteffort-pod66cea88e_baaa_4c44_857c_5ec5ead555a0.slice - libcontainer container kubepods-besteffort-pod66cea88e_baaa_4c44_857c_5ec5ead555a0.slice. 
Sep 11 00:29:31.186734 kubelet[2731]: I0911 00:29:31.186585 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66cea88e-baaa-4c44-857c-5ec5ead555a0-var-lib-calico\") pod \"tigera-operator-755d956888-5ctf5\" (UID: \"66cea88e-baaa-4c44-857c-5ec5ead555a0\") " pod="tigera-operator/tigera-operator-755d956888-5ctf5" Sep 11 00:29:31.186734 kubelet[2731]: I0911 00:29:31.186622 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sckx\" (UniqueName: \"kubernetes.io/projected/66cea88e-baaa-4c44-857c-5ec5ead555a0-kube-api-access-8sckx\") pod \"tigera-operator-755d956888-5ctf5\" (UID: \"66cea88e-baaa-4c44-857c-5ec5ead555a0\") " pod="tigera-operator/tigera-operator-755d956888-5ctf5" Sep 11 00:29:31.364515 containerd[1604]: time="2025-09-11T00:29:31.364449584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9vs,Uid:f9a14993-e262-44c1-8186-bf3f61c90d45,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:31.385316 containerd[1604]: time="2025-09-11T00:29:31.385249093Z" level=info msg="connecting to shim 626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c" address="unix:///run/containerd/s/a020004fa1f6049bcee324e774a97d7a3cbad2c6e36022556bb269e14a29734c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:31.411678 systemd[1]: Started cri-containerd-626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c.scope - libcontainer container 626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c. 
Sep 11 00:29:31.437100 containerd[1604]: time="2025-09-11T00:29:31.436972342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9vs,Uid:f9a14993-e262-44c1-8186-bf3f61c90d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c\""
Sep 11 00:29:31.444320 containerd[1604]: time="2025-09-11T00:29:31.444278869Z" level=info msg="CreateContainer within sandbox \"626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:29:31.455269 containerd[1604]: time="2025-09-11T00:29:31.455086469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5ctf5,Uid:66cea88e-baaa-4c44-857c-5ec5ead555a0,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:29:31.455629 containerd[1604]: time="2025-09-11T00:29:31.455565871Z" level=info msg="Container 120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:31.469732 containerd[1604]: time="2025-09-11T00:29:31.469677420Z" level=info msg="CreateContainer within sandbox \"626d612d8c0dfac8d9d198a1b36c162c3decad95bcd6bc64ab86ce174c68af6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907\""
Sep 11 00:29:31.470309 containerd[1604]: time="2025-09-11T00:29:31.470263157Z" level=info msg="StartContainer for \"120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907\""
Sep 11 00:29:31.472102 containerd[1604]: time="2025-09-11T00:29:31.472074898Z" level=info msg="connecting to shim 120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907" address="unix:///run/containerd/s/a020004fa1f6049bcee324e774a97d7a3cbad2c6e36022556bb269e14a29734c" protocol=ttrpc version=3
Sep 11 00:29:31.484447 containerd[1604]: time="2025-09-11T00:29:31.484403896Z" level=info msg="connecting to shim 81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d" address="unix:///run/containerd/s/aea3f3f847f64c416bdc3c2cfc31f8bd01541e5458a198679523a73436120476" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:29:31.497695 systemd[1]: Started cri-containerd-120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907.scope - libcontainer container 120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907.
Sep 11 00:29:31.517890 systemd[1]: Started cri-containerd-81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d.scope - libcontainer container 81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d.
Sep 11 00:29:31.545296 containerd[1604]: time="2025-09-11T00:29:31.545248441Z" level=info msg="StartContainer for \"120e3b03c2280bbe3c23451a92e4acb6892eabc6526360b99c3705fb440d5907\" returns successfully"
Sep 11 00:29:31.568861 containerd[1604]: time="2025-09-11T00:29:31.568808469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5ctf5,Uid:66cea88e-baaa-4c44-857c-5ec5ead555a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d\""
Sep 11 00:29:31.574839 containerd[1604]: time="2025-09-11T00:29:31.574766984Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 00:29:32.406942 kubelet[2731]: I0911 00:29:32.406857 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg9vs" podStartSLOduration=2.406836721 podStartE2EDuration="2.406836721s" podCreationTimestamp="2025-09-11 00:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:32.406668545 +0000 UTC m=+7.124276435" watchObservedRunningTime="2025-09-11 00:29:32.406836721 +0000 UTC m=+7.124444611"
Sep 11 00:29:32.750372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111334634.mount: Deactivated successfully.
Sep 11 00:29:33.511219 containerd[1604]: time="2025-09-11T00:29:33.511163139Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:33.512028 containerd[1604]: time="2025-09-11T00:29:33.512002586Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 11 00:29:33.513263 containerd[1604]: time="2025-09-11T00:29:33.513198238Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:33.515168 containerd[1604]: time="2025-09-11T00:29:33.515121842Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:33.515899 containerd[1604]: time="2025-09-11T00:29:33.515860312Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.941053167s"
Sep 11 00:29:33.515899 containerd[1604]: time="2025-09-11T00:29:33.515892617Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 11 00:29:33.520342 containerd[1604]: time="2025-09-11T00:29:33.520312393Z" level=info msg="CreateContainer within sandbox \"81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 00:29:33.528298 containerd[1604]: time="2025-09-11T00:29:33.528261423Z" level=info msg="Container 54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:33.534389 containerd[1604]: time="2025-09-11T00:29:33.534343569Z" level=info msg="CreateContainer within sandbox \"81980af3122f3ac46db6648db85857cc7394f186300760c2469ceec3b451092d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc\""
Sep 11 00:29:33.534771 containerd[1604]: time="2025-09-11T00:29:33.534742417Z" level=info msg="StartContainer for \"54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc\""
Sep 11 00:29:33.535576 containerd[1604]: time="2025-09-11T00:29:33.535525084Z" level=info msg="connecting to shim 54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc" address="unix:///run/containerd/s/aea3f3f847f64c416bdc3c2cfc31f8bd01541e5458a198679523a73436120476" protocol=ttrpc version=3
Sep 11 00:29:33.589673 systemd[1]: Started cri-containerd-54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc.scope - libcontainer container 54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc.
Sep 11 00:29:33.617995 containerd[1604]: time="2025-09-11T00:29:33.617959837Z" level=info msg="StartContainer for \"54e9680d85f86301802b561aad1795b7e508d87ff4671affa718457676292ebc\" returns successfully"
Sep 11 00:29:34.408429 kubelet[2731]: I0911 00:29:34.408350 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-5ctf5" podStartSLOduration=1.4659563279999999 podStartE2EDuration="3.408329202s" podCreationTimestamp="2025-09-11 00:29:31 +0000 UTC" firstStartedPulling="2025-09-11 00:29:31.574277534 +0000 UTC m=+6.291885424" lastFinishedPulling="2025-09-11 00:29:33.516650418 +0000 UTC m=+8.234258298" observedRunningTime="2025-09-11 00:29:34.408103795 +0000 UTC m=+9.125711685" watchObservedRunningTime="2025-09-11 00:29:34.408329202 +0000 UTC m=+9.125937093"
Sep 11 00:29:38.854352 sudo[1802]: pam_unix(sudo:session): session closed for user root
Sep 11 00:29:38.855804 sshd[1801]: Connection closed by 10.0.0.1 port 54904
Sep 11 00:29:38.856209 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:38.860569 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:54904.service: Deactivated successfully.
Sep 11 00:29:38.862707 systemd[1]: session-7.scope: Deactivated successfully.
Sep 11 00:29:38.862940 systemd[1]: session-7.scope: Consumed 5.555s CPU time, 227.4M memory peak.
Sep 11 00:29:38.864096 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit.
Sep 11 00:29:38.865198 systemd-logind[1584]: Removed session 7.
Sep 11 00:29:39.777651 update_engine[1588]: I20250911 00:29:39.777525 1588 update_attempter.cc:509] Updating boot flags...
Sep 11 00:29:42.946427 systemd[1]: Created slice kubepods-besteffort-pod3859c0b7_91bf_46ce_8603_a2f4510ff5f5.slice - libcontainer container kubepods-besteffort-pod3859c0b7_91bf_46ce_8603_a2f4510ff5f5.slice.
Sep 11 00:29:42.965183 kubelet[2731]: I0911 00:29:42.965095 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3859c0b7-91bf-46ce-8603-a2f4510ff5f5-typha-certs\") pod \"calico-typha-6d89bff7b7-85d9z\" (UID: \"3859c0b7-91bf-46ce-8603-a2f4510ff5f5\") " pod="calico-system/calico-typha-6d89bff7b7-85d9z"
Sep 11 00:29:42.965836 kubelet[2731]: I0911 00:29:42.965795 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3859c0b7-91bf-46ce-8603-a2f4510ff5f5-tigera-ca-bundle\") pod \"calico-typha-6d89bff7b7-85d9z\" (UID: \"3859c0b7-91bf-46ce-8603-a2f4510ff5f5\") " pod="calico-system/calico-typha-6d89bff7b7-85d9z"
Sep 11 00:29:42.965836 kubelet[2731]: I0911 00:29:42.965823 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qbq\" (UniqueName: \"kubernetes.io/projected/3859c0b7-91bf-46ce-8603-a2f4510ff5f5-kube-api-access-w9qbq\") pod \"calico-typha-6d89bff7b7-85d9z\" (UID: \"3859c0b7-91bf-46ce-8603-a2f4510ff5f5\") " pod="calico-system/calico-typha-6d89bff7b7-85d9z"
Sep 11 00:29:43.229234 systemd[1]: Created slice kubepods-besteffort-pod62b09233_cd6e_472b_a738_f0e77b375bb4.slice - libcontainer container kubepods-besteffort-pod62b09233_cd6e_472b_a738_f0e77b375bb4.slice.
Sep 11 00:29:43.257345 containerd[1604]: time="2025-09-11T00:29:43.257302044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d89bff7b7-85d9z,Uid:3859c0b7-91bf-46ce-8603-a2f4510ff5f5,Namespace:calico-system,Attempt:0,}"
Sep 11 00:29:43.267645 kubelet[2731]: I0911 00:29:43.267616 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/62b09233-cd6e-472b-a738-f0e77b375bb4-node-certs\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267645 kubelet[2731]: I0911 00:29:43.267648 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b09233-cd6e-472b-a738-f0e77b375bb4-tigera-ca-bundle\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267730 kubelet[2731]: I0911 00:29:43.267665 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-var-run-calico\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267730 kubelet[2731]: I0911 00:29:43.267686 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-xtables-lock\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267730 kubelet[2731]: I0911 00:29:43.267701 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-cni-bin-dir\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267730 kubelet[2731]: I0911 00:29:43.267717 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-lib-modules\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267833 kubelet[2731]: I0911 00:29:43.267732 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbsf\" (UniqueName: \"kubernetes.io/projected/62b09233-cd6e-472b-a738-f0e77b375bb4-kube-api-access-7kbsf\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267833 kubelet[2731]: I0911 00:29:43.267748 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-flexvol-driver-host\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267833 kubelet[2731]: I0911 00:29:43.267764 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-var-lib-calico\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267833 kubelet[2731]: I0911 00:29:43.267781 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-policysync\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267933 kubelet[2731]: I0911 00:29:43.267844 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-cni-log-dir\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.267933 kubelet[2731]: I0911 00:29:43.267871 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/62b09233-cd6e-472b-a738-f0e77b375bb4-cni-net-dir\") pod \"calico-node-wph4m\" (UID: \"62b09233-cd6e-472b-a738-f0e77b375bb4\") " pod="calico-system/calico-node-wph4m"
Sep 11 00:29:43.296641 containerd[1604]: time="2025-09-11T00:29:43.295464231Z" level=info msg="connecting to shim 19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e" address="unix:///run/containerd/s/809e4f786f8434aab9062f8fdb176e249c07f9f5f42ed9f2e633147961f20e5d" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:29:43.323733 systemd[1]: Started cri-containerd-19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e.scope - libcontainer container 19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e.
Sep 11 00:29:43.370305 kubelet[2731]: E0911 00:29:43.370241 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:29:43.370305 kubelet[2731]: W0911 00:29:43.370267 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:29:43.371505 containerd[1604]: time="2025-09-11T00:29:43.371440426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d89bff7b7-85d9z,Uid:3859c0b7-91bf-46ce-8603-a2f4510ff5f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e\""
Sep 11 00:29:43.373225 kubelet[2731]: E0911 00:29:43.373139 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:29:43.373277 containerd[1604]: time="2025-09-11T00:29:43.373203041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 11 00:29:43.518002 kubelet[2731]: E0911 00:29:43.517880 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7"
Sep 11 00:29:43.533642 containerd[1604]: time="2025-09-11T00:29:43.533597841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wph4m,Uid:62b09233-cd6e-472b-a738-f0e77b375bb4,Namespace:calico-system,Attempt:0,}"
Sep 11 00:29:43.554359 containerd[1604]: time="2025-09-11T00:29:43.554305829Z" level=info msg="connecting to shim fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e" address="unix:///run/containerd/s/b74704b9300c4b1c227bdddfd641e7f11962fba515f5d6468b27dd3899b68459" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:29:43.563146 kubelet[2731]: E0911 00:29:43.563129 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:29:43.563146 kubelet[2731]: W0911 00:29:43.563141 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:29:43.563197 kubelet[2731]: E0911 00:29:43.563151 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 11 00:29:43.563341 kubelet[2731]: E0911 00:29:43.563326 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.563341 kubelet[2731]: W0911 00:29:43.563336 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.563397 kubelet[2731]: E0911 00:29:43.563343 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.563528 kubelet[2731]: E0911 00:29:43.563512 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.563528 kubelet[2731]: W0911 00:29:43.563523 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.563589 kubelet[2731]: E0911 00:29:43.563529 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.563754 kubelet[2731]: E0911 00:29:43.563737 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.563754 kubelet[2731]: W0911 00:29:43.563748 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.563754 kubelet[2731]: E0911 00:29:43.563755 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.570321 kubelet[2731]: E0911 00:29:43.570292 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.570321 kubelet[2731]: W0911 00:29:43.570306 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.570321 kubelet[2731]: E0911 00:29:43.570318 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.570413 kubelet[2731]: I0911 00:29:43.570350 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f9f745c4-024b-492d-9356-4d4c5f1d59a7-socket-dir\") pod \"csi-node-driver-n8vdh\" (UID: \"f9f745c4-024b-492d-9356-4d4c5f1d59a7\") " pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:43.570615 kubelet[2731]: E0911 00:29:43.570597 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.570615 kubelet[2731]: W0911 00:29:43.570609 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.570663 kubelet[2731]: E0911 00:29:43.570617 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.570663 kubelet[2731]: I0911 00:29:43.570630 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqhb\" (UniqueName: \"kubernetes.io/projected/f9f745c4-024b-492d-9356-4d4c5f1d59a7-kube-api-access-ljqhb\") pod \"csi-node-driver-n8vdh\" (UID: \"f9f745c4-024b-492d-9356-4d4c5f1d59a7\") " pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:43.570840 kubelet[2731]: E0911 00:29:43.570814 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.570840 kubelet[2731]: W0911 00:29:43.570827 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.570840 kubelet[2731]: E0911 00:29:43.570834 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.570919 kubelet[2731]: I0911 00:29:43.570846 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f9f745c4-024b-492d-9356-4d4c5f1d59a7-registration-dir\") pod \"csi-node-driver-n8vdh\" (UID: \"f9f745c4-024b-492d-9356-4d4c5f1d59a7\") " pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:43.571071 kubelet[2731]: E0911 00:29:43.571053 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.571071 kubelet[2731]: W0911 00:29:43.571066 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.571071 kubelet[2731]: E0911 00:29:43.571073 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.571146 kubelet[2731]: I0911 00:29:43.571086 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9f745c4-024b-492d-9356-4d4c5f1d59a7-kubelet-dir\") pod \"csi-node-driver-n8vdh\" (UID: \"f9f745c4-024b-492d-9356-4d4c5f1d59a7\") " pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:43.571340 kubelet[2731]: E0911 00:29:43.571322 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.571340 kubelet[2731]: W0911 00:29:43.571334 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.571340 kubelet[2731]: E0911 00:29:43.571341 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.571410 kubelet[2731]: I0911 00:29:43.571361 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f9f745c4-024b-492d-9356-4d4c5f1d59a7-varrun\") pod \"csi-node-driver-n8vdh\" (UID: \"f9f745c4-024b-492d-9356-4d4c5f1d59a7\") " pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:43.571625 kubelet[2731]: E0911 00:29:43.571574 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.571625 kubelet[2731]: W0911 00:29:43.571585 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.571625 kubelet[2731]: E0911 00:29:43.571593 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.571850 kubelet[2731]: E0911 00:29:43.571833 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.571850 kubelet[2731]: W0911 00:29:43.571844 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.571850 kubelet[2731]: E0911 00:29:43.571852 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.572062 kubelet[2731]: E0911 00:29:43.572044 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.572062 kubelet[2731]: W0911 00:29:43.572056 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.572138 kubelet[2731]: E0911 00:29:43.572065 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.572293 kubelet[2731]: E0911 00:29:43.572277 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.572293 kubelet[2731]: W0911 00:29:43.572287 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.572341 kubelet[2731]: E0911 00:29:43.572296 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.572497 kubelet[2731]: E0911 00:29:43.572481 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.572497 kubelet[2731]: W0911 00:29:43.572491 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.572571 kubelet[2731]: E0911 00:29:43.572498 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.572720 kubelet[2731]: E0911 00:29:43.572703 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.572720 kubelet[2731]: W0911 00:29:43.572714 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.572720 kubelet[2731]: E0911 00:29:43.572721 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.572914 kubelet[2731]: E0911 00:29:43.572899 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.572914 kubelet[2731]: W0911 00:29:43.572909 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.572965 kubelet[2731]: E0911 00:29:43.572916 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.573099 kubelet[2731]: E0911 00:29:43.573084 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.573099 kubelet[2731]: W0911 00:29:43.573094 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.573167 kubelet[2731]: E0911 00:29:43.573101 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.573603 kubelet[2731]: E0911 00:29:43.573579 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.573633 kubelet[2731]: W0911 00:29:43.573601 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.573633 kubelet[2731]: E0911 00:29:43.573624 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.573871 kubelet[2731]: E0911 00:29:43.573858 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.573871 kubelet[2731]: W0911 00:29:43.573868 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.573921 kubelet[2731]: E0911 00:29:43.573877 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.579706 systemd[1]: Started cri-containerd-fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e.scope - libcontainer container fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e. 
Sep 11 00:29:43.613963 containerd[1604]: time="2025-09-11T00:29:43.613900616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wph4m,Uid:62b09233-cd6e-472b-a738-f0e77b375bb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\"" Sep 11 00:29:43.672659 kubelet[2731]: E0911 00:29:43.672623 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.672659 kubelet[2731]: W0911 00:29:43.672650 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.672812 kubelet[2731]: E0911 00:29:43.672674 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.673638 kubelet[2731]: E0911 00:29:43.673622 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.673638 kubelet[2731]: W0911 00:29:43.673635 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.673694 kubelet[2731]: E0911 00:29:43.673648 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.673947 kubelet[2731]: E0911 00:29:43.673919 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.673985 kubelet[2731]: W0911 00:29:43.673946 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.673985 kubelet[2731]: E0911 00:29:43.673970 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.674284 kubelet[2731]: E0911 00:29:43.674264 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.674284 kubelet[2731]: W0911 00:29:43.674279 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.674348 kubelet[2731]: E0911 00:29:43.674290 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.674561 kubelet[2731]: E0911 00:29:43.674531 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.674561 kubelet[2731]: W0911 00:29:43.674554 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.674561 kubelet[2731]: E0911 00:29:43.674562 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.674772 kubelet[2731]: E0911 00:29:43.674757 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.674772 kubelet[2731]: W0911 00:29:43.674767 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.674820 kubelet[2731]: E0911 00:29:43.674775 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.675032 kubelet[2731]: E0911 00:29:43.675004 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.675032 kubelet[2731]: W0911 00:29:43.675015 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.675032 kubelet[2731]: E0911 00:29:43.675024 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.675255 kubelet[2731]: E0911 00:29:43.675229 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.675255 kubelet[2731]: W0911 00:29:43.675238 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.675255 kubelet[2731]: E0911 00:29:43.675250 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.675454 kubelet[2731]: E0911 00:29:43.675436 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.675454 kubelet[2731]: W0911 00:29:43.675448 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.675576 kubelet[2731]: E0911 00:29:43.675457 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.675710 kubelet[2731]: E0911 00:29:43.675693 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.675710 kubelet[2731]: W0911 00:29:43.675706 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.675751 kubelet[2731]: E0911 00:29:43.675717 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.675915 kubelet[2731]: E0911 00:29:43.675899 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.675915 kubelet[2731]: W0911 00:29:43.675910 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.675961 kubelet[2731]: E0911 00:29:43.675920 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.676114 kubelet[2731]: E0911 00:29:43.676099 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.676114 kubelet[2731]: W0911 00:29:43.676109 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.676169 kubelet[2731]: E0911 00:29:43.676117 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.676355 kubelet[2731]: E0911 00:29:43.676331 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.676384 kubelet[2731]: W0911 00:29:43.676353 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.676384 kubelet[2731]: E0911 00:29:43.676373 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.676600 kubelet[2731]: E0911 00:29:43.676583 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.676600 kubelet[2731]: W0911 00:29:43.676595 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.676649 kubelet[2731]: E0911 00:29:43.676603 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.676774 kubelet[2731]: E0911 00:29:43.676759 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.676774 kubelet[2731]: W0911 00:29:43.676769 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.676825 kubelet[2731]: E0911 00:29:43.676776 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.676956 kubelet[2731]: E0911 00:29:43.676941 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.676956 kubelet[2731]: W0911 00:29:43.676952 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.677000 kubelet[2731]: E0911 00:29:43.676959 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.677126 kubelet[2731]: E0911 00:29:43.677109 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.677126 kubelet[2731]: W0911 00:29:43.677118 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.677126 kubelet[2731]: E0911 00:29:43.677125 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.677367 kubelet[2731]: E0911 00:29:43.677350 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.677367 kubelet[2731]: W0911 00:29:43.677361 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.677367 kubelet[2731]: E0911 00:29:43.677369 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:43.677573 kubelet[2731]: E0911 00:29:43.677558 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.677573 kubelet[2731]: W0911 00:29:43.677568 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.677630 kubelet[2731]: E0911 00:29:43.677575 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:29:43.677746 kubelet[2731]: E0911 00:29:43.677732 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:43.677746 kubelet[2731]: W0911 00:29:43.677741 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:43.677791 kubelet[2731]: E0911 00:29:43.677750 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:45.370071 kubelet[2731]: E0911 00:29:45.370020 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:46.016396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237332020.mount: Deactivated successfully. 
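The repeated kubelet errors above are one failure reported three ways: the kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing `<driver>/uds init`, the executable is missing ("executable file not found in $PATH"), the resulting empty stdout fails JSON unmarshalling ("unexpected end of JSON input"), and the plugin is skipped. A minimal sketch of the call contract the kubelet expects (the stub and its response follow the FlexVolume convention; they are not taken from this host):

```shell
#!/bin/sh
# Minimal FlexVolume driver stub: the kubelet invokes the driver executable
# with a subcommand ("init" on first probe) and parses stdout as JSON.
flexvol_stub() {
  case "$1" in
    init) echo '{"status":"Success","capabilities":{"attach":false}}' ;;
    *)    echo '{"status":"Not supported"}'; return 1 ;;
  esac
}

flexvol_stub init
```

An executable like this installed at .../nodeagent~uds/uds would satisfy the probe; in this log the errors stop being fatal anyway because Calico's pod2daemon-flexvol init container (pulled and started further down) installs the real driver.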
Sep 11 00:29:46.485564 containerd[1604]: time="2025-09-11T00:29:46.485382374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:46.486449 containerd[1604]: time="2025-09-11T00:29:46.486399824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:29:46.487702 containerd[1604]: time="2025-09-11T00:29:46.487662939Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:46.489802 containerd[1604]: time="2025-09-11T00:29:46.489757201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:46.490317 containerd[1604]: time="2025-09-11T00:29:46.490282122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.117058073s" Sep 11 00:29:46.490366 containerd[1604]: time="2025-09-11T00:29:46.490321178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:29:46.491362 containerd[1604]: time="2025-09-11T00:29:46.491339971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:29:46.507093 containerd[1604]: time="2025-09-11T00:29:46.507050359Z" level=info msg="CreateContainer within sandbox \"19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:29:46.515704 containerd[1604]: time="2025-09-11T00:29:46.515648968Z" level=info msg="Container 7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:46.526452 containerd[1604]: time="2025-09-11T00:29:46.526404386Z" level=info msg="CreateContainer within sandbox \"19caf7788518a26965cd91986c2b5a30d73a8093b93c36f30ca6066306b5938e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e\"" Sep 11 00:29:46.527049 containerd[1604]: time="2025-09-11T00:29:46.526989170Z" level=info msg="StartContainer for \"7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e\"" Sep 11 00:29:46.528463 containerd[1604]: time="2025-09-11T00:29:46.528437914Z" level=info msg="connecting to shim 7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e" address="unix:///run/containerd/s/809e4f786f8434aab9062f8fdb176e249c07f9f5f42ed9f2e633147961f20e5d" protocol=ttrpc version=3 Sep 11 00:29:46.558718 systemd[1]: Started cri-containerd-7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e.scope - libcontainer container 7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e. 
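The pod_startup_latency_tracker entry that follows for calico-typha-6d89bff7b7-85d9z reports two durations that can be reproduced from the timestamps it logs: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that gap minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick sketch of the arithmetic, using Decimal because the log carries nanosecond fractions that datetime alone cannot represent:

```python
from datetime import datetime, timezone
from decimal import Decimal

def ts(s):
    # Split off the nanosecond fraction; datetime only handles microseconds.
    base, frac = s.split(".")
    dt = datetime.strptime(base, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return Decimal(int(dt.timestamp())) + Decimal("0." + frac)

created    = ts("2025-09-11 00:29:42.000000000")  # podCreationTimestamp
running    = ts("2025-09-11 00:29:47.441239272")  # observedRunningTime
pull_start = ts("2025-09-11 00:29:43.372719387")  # firstStartedPulling
pull_end   = ts("2025-09-11 00:29:46.491143205")  # lastFinishedPulling

e2e = running - created                  # 5.441239272 (logged podStartE2EDuration)
slo = e2e - (pull_end - pull_start)      # 2.322815454 (logged 2.322815453; 1 ns rounding)
print(e2e, slo)
```

The match (to within a nanosecond of rounding) supports the reading that the SLO duration excludes time spent pulling images.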
Sep 11 00:29:46.609234 containerd[1604]: time="2025-09-11T00:29:46.609187364Z" level=info msg="StartContainer for \"7e10ede634e9407f03c8fa55112f022598404bcb6361436d2052c6c21f4b8c4e\" returns successfully" Sep 11 00:29:47.369934 kubelet[2731]: E0911 00:29:47.369879 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:47.441316 kubelet[2731]: I0911 00:29:47.441253 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d89bff7b7-85d9z" podStartSLOduration=2.322815453 podStartE2EDuration="5.441239272s" podCreationTimestamp="2025-09-11 00:29:42 +0000 UTC" firstStartedPulling="2025-09-11 00:29:43.372719387 +0000 UTC m=+18.090327277" lastFinishedPulling="2025-09-11 00:29:46.491143205 +0000 UTC m=+21.208751096" observedRunningTime="2025-09-11 00:29:47.440904068 +0000 UTC m=+22.158511948" watchObservedRunningTime="2025-09-11 00:29:47.441239272 +0000 UTC m=+22.158847163" Sep 11 00:29:47.490494 kubelet[2731]: E0911 00:29:47.490458 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:29:47.490494 kubelet[2731]: W0911 00:29:47.490483 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:29:47.490805 kubelet[2731]: E0911 00:29:47.490507 2731 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:29:47.978588 containerd[1604]: time="2025-09-11T00:29:47.978518157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:47.979225 containerd[1604]: time="2025-09-11T00:29:47.979191533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:29:47.980299 containerd[1604]: time="2025-09-11T00:29:47.980266968Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:47.982732 containerd[1604]: time="2025-09-11T00:29:47.982272111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:47.982732 containerd[1604]: time="2025-09-11T00:29:47.982636259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.491271042s" Sep 11 00:29:47.982732 containerd[1604]: time="2025-09-11T00:29:47.982668701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:29:47.987580 containerd[1604]: time="2025-09-11T00:29:47.987513795Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:29:47.997950 containerd[1604]: time="2025-09-11T00:29:47.997834916Z" level=info msg="Container 54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:48.007003 containerd[1604]: time="2025-09-11T00:29:48.006962667Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\"" Sep 11 00:29:48.007339 containerd[1604]: time="2025-09-11T00:29:48.007296591Z" level=info msg="StartContainer for \"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\"" Sep 11 00:29:48.008998 containerd[1604]: time="2025-09-11T00:29:48.008968812Z" level=info msg="connecting to shim 54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23" address="unix:///run/containerd/s/b74704b9300c4b1c227bdddfd641e7f11962fba515f5d6468b27dd3899b68459" protocol=ttrpc version=3 Sep 11 00:29:48.036964 systemd[1]: Started cri-containerd-54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23.scope - libcontainer container 54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23. Sep 11 00:29:48.088578 systemd[1]: cri-containerd-54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23.scope: Deactivated successfully. 
Sep 11 00:29:48.093097 containerd[1604]: time="2025-09-11T00:29:48.093051935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\" id:\"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\" pid:3448 exited_at:{seconds:1757550588 nanos:92325829}" Sep 11 00:29:48.167921 containerd[1604]: time="2025-09-11T00:29:48.167860612Z" level=info msg="received exit event container_id:\"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\" id:\"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\" pid:3448 exited_at:{seconds:1757550588 nanos:92325829}" Sep 11 00:29:48.169510 containerd[1604]: time="2025-09-11T00:29:48.169449713Z" level=info msg="StartContainer for \"54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23\" returns successfully" Sep 11 00:29:48.192694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54c3837cf1d9999c2b085a37d0b823a7009ee77ec2e091f6b8434477ea7d2e23-rootfs.mount: Deactivated successfully. 
Sep 11 00:29:48.435216 kubelet[2731]: I0911 00:29:48.435177 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:29:49.369668 kubelet[2731]: E0911 00:29:49.369626 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:49.438748 containerd[1604]: time="2025-09-11T00:29:49.438714411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:29:51.369855 kubelet[2731]: E0911 00:29:51.369782 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:51.674619 kubelet[2731]: I0911 00:29:51.674572 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:29:53.369609 kubelet[2731]: E0911 00:29:53.369342 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:53.904843 containerd[1604]: time="2025-09-11T00:29:53.904762402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:53.905877 containerd[1604]: time="2025-09-11T00:29:53.905839312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:29:53.907455 
containerd[1604]: time="2025-09-11T00:29:53.907406477Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:53.911449 containerd[1604]: time="2025-09-11T00:29:53.911398509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:53.912081 containerd[1604]: time="2025-09-11T00:29:53.912034608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.473283718s" Sep 11 00:29:53.912081 containerd[1604]: time="2025-09-11T00:29:53.912076496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:29:54.082868 containerd[1604]: time="2025-09-11T00:29:54.082802794Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:29:54.201214 containerd[1604]: time="2025-09-11T00:29:54.201094351Z" level=info msg="Container cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:54.222029 containerd[1604]: time="2025-09-11T00:29:54.221974414Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\"" Sep 11 00:29:54.222656 containerd[1604]: time="2025-09-11T00:29:54.222611306Z" level=info msg="StartContainer for \"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\"" Sep 11 00:29:54.224124 containerd[1604]: time="2025-09-11T00:29:54.224075616Z" level=info msg="connecting to shim cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1" address="unix:///run/containerd/s/b74704b9300c4b1c227bdddfd641e7f11962fba515f5d6468b27dd3899b68459" protocol=ttrpc version=3 Sep 11 00:29:54.248726 systemd[1]: Started cri-containerd-cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1.scope - libcontainer container cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1. Sep 11 00:29:54.480515 containerd[1604]: time="2025-09-11T00:29:54.480385628Z" level=info msg="StartContainer for \"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\" returns successfully" Sep 11 00:29:55.369738 kubelet[2731]: E0911 00:29:55.369695 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:55.768994 containerd[1604]: time="2025-09-11T00:29:55.768954871Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:29:55.771914 systemd[1]: cri-containerd-cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1.scope: Deactivated successfully. 
Sep 11 00:29:55.772265 systemd[1]: cri-containerd-cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1.scope: Consumed 557ms CPU time, 180.8M memory peak, 3.5M read from disk, 171.3M written to disk. Sep 11 00:29:55.772716 containerd[1604]: time="2025-09-11T00:29:55.772685671Z" level=info msg="received exit event container_id:\"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\" id:\"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\" pid:3511 exited_at:{seconds:1757550595 nanos:772438410}" Sep 11 00:29:55.772900 containerd[1604]: time="2025-09-11T00:29:55.772863157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\" id:\"cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1\" pid:3511 exited_at:{seconds:1757550595 nanos:772438410}" Sep 11 00:29:55.793896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdca00006d098fac4dfbcb99b2a66b88052483f4017c0d169a90aa682a6060b1-rootfs.mount: Deactivated successfully. Sep 11 00:29:55.866816 kubelet[2731]: I0911 00:29:55.866764 2731 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:29:56.187178 systemd[1]: Created slice kubepods-burstable-podb67b4d0d_825c_42d3_af2e_2c137271f3b5.slice - libcontainer container kubepods-burstable-podb67b4d0d_825c_42d3_af2e_2c137271f3b5.slice. Sep 11 00:29:56.198105 systemd[1]: Created slice kubepods-besteffort-pod519de761_8031_4231_9a05_7b978de8549f.slice - libcontainer container kubepods-besteffort-pod519de761_8031_4231_9a05_7b978de8549f.slice. Sep 11 00:29:56.206095 systemd[1]: Created slice kubepods-besteffort-pod3a54724d_d025_4e6c_9af5_25bad0eafbe3.slice - libcontainer container kubepods-besteffort-pod3a54724d_d025_4e6c_9af5_25bad0eafbe3.slice. 
Sep 11 00:29:56.212517 systemd[1]: Created slice kubepods-burstable-pod9830b188_c7bf_4ce0_9254_c8625fed4c6b.slice - libcontainer container kubepods-burstable-pod9830b188_c7bf_4ce0_9254_c8625fed4c6b.slice. Sep 11 00:29:56.217890 systemd[1]: Created slice kubepods-besteffort-pod2c8e5679_b12e_4f76_a56f_60c242698950.slice - libcontainer container kubepods-besteffort-pod2c8e5679_b12e_4f76_a56f_60c242698950.slice. Sep 11 00:29:56.224429 systemd[1]: Created slice kubepods-besteffort-pod1e02c891_beca_4653_95fe_399613c2df8b.slice - libcontainer container kubepods-besteffort-pod1e02c891_beca_4653_95fe_399613c2df8b.slice. Sep 11 00:29:56.229938 systemd[1]: Created slice kubepods-besteffort-pod723835f4_9195_4d03_9f2f_7e23eaa56a2c.slice - libcontainer container kubepods-besteffort-pod723835f4_9195_4d03_9f2f_7e23eaa56a2c.slice. Sep 11 00:29:56.269713 kubelet[2731]: I0911 00:29:56.269658 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtr8\" (UniqueName: \"kubernetes.io/projected/9830b188-c7bf-4ce0-9254-c8625fed4c6b-kube-api-access-5rtr8\") pod \"coredns-674b8bbfcf-8znxw\" (UID: \"9830b188-c7bf-4ce0-9254-c8625fed4c6b\") " pod="kube-system/coredns-674b8bbfcf-8znxw" Sep 11 00:29:56.269713 kubelet[2731]: I0911 00:29:56.269694 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a54724d-d025-4e6c-9af5-25bad0eafbe3-tigera-ca-bundle\") pod \"calico-kube-controllers-9b784dfb7-75x9b\" (UID: \"3a54724d-d025-4e6c-9af5-25bad0eafbe3\") " pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" Sep 11 00:29:56.269713 kubelet[2731]: I0911 00:29:56.269710 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwx72\" (UniqueName: \"kubernetes.io/projected/1e02c891-beca-4653-95fe-399613c2df8b-kube-api-access-qwx72\") pod 
\"calico-apiserver-8f8b48fb4-vh4qx\" (UID: \"1e02c891-beca-4653-95fe-399613c2df8b\") " pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" Sep 11 00:29:56.269713 kubelet[2731]: I0911 00:29:56.269725 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/723835f4-9195-4d03-9f2f-7e23eaa56a2c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-276b9\" (UID: \"723835f4-9195-4d03-9f2f-7e23eaa56a2c\") " pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.269967 kubelet[2731]: I0911 00:29:56.269742 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/519de761-8031-4231-9a05-7b978de8549f-whisker-backend-key-pair\") pod \"whisker-6984b7656d-xkzbz\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " pod="calico-system/whisker-6984b7656d-xkzbz" Sep 11 00:29:56.269967 kubelet[2731]: I0911 00:29:56.269757 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/519de761-8031-4231-9a05-7b978de8549f-whisker-ca-bundle\") pod \"whisker-6984b7656d-xkzbz\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " pod="calico-system/whisker-6984b7656d-xkzbz" Sep 11 00:29:56.269967 kubelet[2731]: I0911 00:29:56.269773 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvfb\" (UniqueName: \"kubernetes.io/projected/3a54724d-d025-4e6c-9af5-25bad0eafbe3-kube-api-access-bhvfb\") pod \"calico-kube-controllers-9b784dfb7-75x9b\" (UID: \"3a54724d-d025-4e6c-9af5-25bad0eafbe3\") " pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" Sep 11 00:29:56.269967 kubelet[2731]: I0911 00:29:56.269831 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/b67b4d0d-825c-42d3-af2e-2c137271f3b5-config-volume\") pod \"coredns-674b8bbfcf-th2qt\" (UID: \"b67b4d0d-825c-42d3-af2e-2c137271f3b5\") " pod="kube-system/coredns-674b8bbfcf-th2qt" Sep 11 00:29:56.269967 kubelet[2731]: I0911 00:29:56.269873 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9830b188-c7bf-4ce0-9254-c8625fed4c6b-config-volume\") pod \"coredns-674b8bbfcf-8znxw\" (UID: \"9830b188-c7bf-4ce0-9254-c8625fed4c6b\") " pod="kube-system/coredns-674b8bbfcf-8znxw" Sep 11 00:29:56.270117 kubelet[2731]: I0911 00:29:56.269921 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjzz\" (UniqueName: \"kubernetes.io/projected/519de761-8031-4231-9a05-7b978de8549f-kube-api-access-bcjzz\") pod \"whisker-6984b7656d-xkzbz\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " pod="calico-system/whisker-6984b7656d-xkzbz" Sep 11 00:29:56.270117 kubelet[2731]: I0911 00:29:56.269938 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/723835f4-9195-4d03-9f2f-7e23eaa56a2c-config\") pod \"goldmane-54d579b49d-276b9\" (UID: \"723835f4-9195-4d03-9f2f-7e23eaa56a2c\") " pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.270117 kubelet[2731]: I0911 00:29:56.269955 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/723835f4-9195-4d03-9f2f-7e23eaa56a2c-goldmane-key-pair\") pod \"goldmane-54d579b49d-276b9\" (UID: \"723835f4-9195-4d03-9f2f-7e23eaa56a2c\") " pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.270117 kubelet[2731]: I0911 00:29:56.269979 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-q7nkp\" (UniqueName: \"kubernetes.io/projected/2c8e5679-b12e-4f76-a56f-60c242698950-kube-api-access-q7nkp\") pod \"calico-apiserver-8f8b48fb4-dq26j\" (UID: \"2c8e5679-b12e-4f76-a56f-60c242698950\") " pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" Sep 11 00:29:56.270117 kubelet[2731]: I0911 00:29:56.269996 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2lc\" (UniqueName: \"kubernetes.io/projected/b67b4d0d-825c-42d3-af2e-2c137271f3b5-kube-api-access-pd2lc\") pod \"coredns-674b8bbfcf-th2qt\" (UID: \"b67b4d0d-825c-42d3-af2e-2c137271f3b5\") " pod="kube-system/coredns-674b8bbfcf-th2qt" Sep 11 00:29:56.270266 kubelet[2731]: I0911 00:29:56.270017 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c8e5679-b12e-4f76-a56f-60c242698950-calico-apiserver-certs\") pod \"calico-apiserver-8f8b48fb4-dq26j\" (UID: \"2c8e5679-b12e-4f76-a56f-60c242698950\") " pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" Sep 11 00:29:56.270266 kubelet[2731]: I0911 00:29:56.270048 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e02c891-beca-4653-95fe-399613c2df8b-calico-apiserver-certs\") pod \"calico-apiserver-8f8b48fb4-vh4qx\" (UID: \"1e02c891-beca-4653-95fe-399613c2df8b\") " pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" Sep 11 00:29:56.270266 kubelet[2731]: I0911 00:29:56.270068 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cjvk\" (UniqueName: \"kubernetes.io/projected/723835f4-9195-4d03-9f2f-7e23eaa56a2c-kube-api-access-9cjvk\") pod \"goldmane-54d579b49d-276b9\" (UID: \"723835f4-9195-4d03-9f2f-7e23eaa56a2c\") " pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.488886 
containerd[1604]: time="2025-09-11T00:29:56.488753067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:29:56.494335 containerd[1604]: time="2025-09-11T00:29:56.494289288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th2qt,Uid:b67b4d0d-825c-42d3-af2e-2c137271f3b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:56.503361 containerd[1604]: time="2025-09-11T00:29:56.503299877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6984b7656d-xkzbz,Uid:519de761-8031-4231-9a05-7b978de8549f,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:56.510108 containerd[1604]: time="2025-09-11T00:29:56.510066392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b784dfb7-75x9b,Uid:3a54724d-d025-4e6c-9af5-25bad0eafbe3,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:56.516350 containerd[1604]: time="2025-09-11T00:29:56.516248438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8znxw,Uid:9830b188-c7bf-4ce0-9254-c8625fed4c6b,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:56.522728 containerd[1604]: time="2025-09-11T00:29:56.522663010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-dq26j,Uid:2c8e5679-b12e-4f76-a56f-60c242698950,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:56.529738 containerd[1604]: time="2025-09-11T00:29:56.529692284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-vh4qx,Uid:1e02c891-beca-4653-95fe-399613c2df8b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:56.535208 containerd[1604]: time="2025-09-11T00:29:56.535160605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-276b9,Uid:723835f4-9195-4d03-9f2f-7e23eaa56a2c,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:56.618873 containerd[1604]: time="2025-09-11T00:29:56.618736423Z" level=error msg="Failed to destroy network for sandbox 
\"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.626610 containerd[1604]: time="2025-09-11T00:29:56.626567252Z" level=error msg="Failed to destroy network for sandbox \"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.633987 containerd[1604]: time="2025-09-11T00:29:56.633766315Z" level=error msg="Failed to destroy network for sandbox \"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.660005 containerd[1604]: time="2025-09-11T00:29:56.659978021Z" level=error msg="Failed to destroy network for sandbox \"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.660574 containerd[1604]: time="2025-09-11T00:29:56.660440370Z" level=error msg="Failed to destroy network for sandbox \"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.660781 containerd[1604]: time="2025-09-11T00:29:56.660757300Z" level=error msg="Failed to destroy network for sandbox 
\"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.686166 containerd[1604]: time="2025-09-11T00:29:56.678809162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8znxw,Uid:9830b188-c7bf-4ce0-9254-c8625fed4c6b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.686560 containerd[1604]: time="2025-09-11T00:29:56.678842721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b784dfb7-75x9b,Uid:3a54724d-d025-4e6c-9af5-25bad0eafbe3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.686730 containerd[1604]: time="2025-09-11T00:29:56.680776729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-dq26j,Uid:2c8e5679-b12e-4f76-a56f-60c242698950,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 11 00:29:56.686792 containerd[1604]: time="2025-09-11T00:29:56.678861339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6984b7656d-xkzbz,Uid:519de761-8031-4231-9a05-7b978de8549f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.686874 containerd[1604]: time="2025-09-11T00:29:56.678858123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th2qt,Uid:b67b4d0d-825c-42d3-af2e-2c137271f3b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.686918 containerd[1604]: time="2025-09-11T00:29:56.679897796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-276b9,Uid:723835f4-9195-4d03-9f2f-7e23eaa56a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.696559 kubelet[2731]: E0911 00:29:56.696338 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.696559 kubelet[2731]: E0911 00:29:56.696449 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" Sep 11 00:29:56.696559 kubelet[2731]: E0911 00:29:56.696474 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" Sep 11 00:29:56.697067 kubelet[2731]: E0911 00:29:56.697027 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.697067 kubelet[2731]: E0911 00:29:56.697059 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.697067 kubelet[2731]: E0911 00:29:56.697075 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-276b9" Sep 11 00:29:56.697222 kubelet[2731]: E0911 00:29:56.697101 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-276b9_calico-system(723835f4-9195-4d03-9f2f-7e23eaa56a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-276b9_calico-system(723835f4-9195-4d03-9f2f-7e23eaa56a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bca28702098163ff978ee45332e3f3f7bccb67b90add2335a80e3d896e8ba48c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-276b9" podUID="723835f4-9195-4d03-9f2f-7e23eaa56a2c" Sep 11 00:29:56.697222 kubelet[2731]: E0911 00:29:56.697132 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.697222 kubelet[2731]: E0911 00:29:56.697146 2731 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6984b7656d-xkzbz" Sep 11 00:29:56.697350 kubelet[2731]: E0911 00:29:56.697158 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6984b7656d-xkzbz" Sep 11 00:29:56.697350 kubelet[2731]: E0911 00:29:56.697192 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6984b7656d-xkzbz_calico-system(519de761-8031-4231-9a05-7b978de8549f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6984b7656d-xkzbz_calico-system(519de761-8031-4231-9a05-7b978de8549f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e5370cea19e11da30378b76b398bb5d14e740e7d08616217ce26b342ee3cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6984b7656d-xkzbz" podUID="519de761-8031-4231-9a05-7b978de8549f" Sep 11 00:29:56.697350 kubelet[2731]: E0911 00:29:56.697213 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.697458 kubelet[2731]: E0911 00:29:56.697228 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" Sep 11 00:29:56.697458 kubelet[2731]: E0911 00:29:56.697241 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" Sep 11 00:29:56.697458 kubelet[2731]: E0911 00:29:56.697285 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b784dfb7-75x9b_calico-system(3a54724d-d025-4e6c-9af5-25bad0eafbe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b784dfb7-75x9b_calico-system(3a54724d-d025-4e6c-9af5-25bad0eafbe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60fc091fa9913be4a1d53bdaf58664c824c2c5dcfd2e4099492ae2646f1fb005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" 
podUID="3a54724d-d025-4e6c-9af5-25bad0eafbe3" Sep 11 00:29:56.697569 kubelet[2731]: E0911 00:29:56.697308 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.697569 kubelet[2731]: E0911 00:29:56.697326 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-th2qt" Sep 11 00:29:56.697569 kubelet[2731]: E0911 00:29:56.697339 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-th2qt" Sep 11 00:29:56.697648 kubelet[2731]: E0911 00:29:56.697374 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-th2qt_kube-system(b67b4d0d-825c-42d3-af2e-2c137271f3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-th2qt_kube-system(b67b4d0d-825c-42d3-af2e-2c137271f3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4b4ec05b481a763a82e8b644f0c9aff7ea36de8e321800945e2ed412781e6a8\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-th2qt" podUID="b67b4d0d-825c-42d3-af2e-2c137271f3b5" Sep 11 00:29:56.697648 kubelet[2731]: E0911 00:29:56.697395 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.697648 kubelet[2731]: E0911 00:29:56.697407 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8znxw" Sep 11 00:29:56.697732 kubelet[2731]: E0911 00:29:56.697419 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8znxw" Sep 11 00:29:56.697732 kubelet[2731]: E0911 00:29:56.697443 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8znxw_kube-system(9830b188-c7bf-4ce0-9254-c8625fed4c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-8znxw_kube-system(9830b188-c7bf-4ce0-9254-c8625fed4c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2312e6dd9d83f3a7c6e85fe00ac418ddd9c20fcf5c01c42c4e0104e0968f81be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8znxw" podUID="9830b188-c7bf-4ce0-9254-c8625fed4c6b" Sep 11 00:29:56.697956 kubelet[2731]: E0911 00:29:56.696523 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f8b48fb4-dq26j_calico-apiserver(2c8e5679-b12e-4f76-a56f-60c242698950)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f8b48fb4-dq26j_calico-apiserver(2c8e5679-b12e-4f76-a56f-60c242698950)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05f70ab0262fcaa0046bbc5b53ea1ffda501e87c3104cfe15a55068d9011d07a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" podUID="2c8e5679-b12e-4f76-a56f-60c242698950" Sep 11 00:29:56.702950 containerd[1604]: time="2025-09-11T00:29:56.702906750Z" level=error msg="Failed to destroy network for sandbox \"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.704206 containerd[1604]: time="2025-09-11T00:29:56.704150493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-vh4qx,Uid:1e02c891-beca-4653-95fe-399613c2df8b,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.704384 kubelet[2731]: E0911 00:29:56.704329 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:56.704441 kubelet[2731]: E0911 00:29:56.704388 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" Sep 11 00:29:56.704441 kubelet[2731]: E0911 00:29:56.704408 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" Sep 11 00:29:56.704501 kubelet[2731]: E0911 00:29:56.704457 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-8f8b48fb4-vh4qx_calico-apiserver(1e02c891-beca-4653-95fe-399613c2df8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f8b48fb4-vh4qx_calico-apiserver(1e02c891-beca-4653-95fe-399613c2df8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06160273672ee767a3bf9a54edada790f19d77c132fb58c97ee0a98842d3185e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" podUID="1e02c891-beca-4653-95fe-399613c2df8b" Sep 11 00:29:57.378496 systemd[1]: Created slice kubepods-besteffort-podf9f745c4_024b_492d_9356_4d4c5f1d59a7.slice - libcontainer container kubepods-besteffort-podf9f745c4_024b_492d_9356_4d4c5f1d59a7.slice. Sep 11 00:29:57.380466 containerd[1604]: time="2025-09-11T00:29:57.380425338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8vdh,Uid:f9f745c4-024b-492d-9356-4d4c5f1d59a7,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:57.427180 containerd[1604]: time="2025-09-11T00:29:57.427105872Z" level=error msg="Failed to destroy network for sandbox \"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:57.428450 containerd[1604]: time="2025-09-11T00:29:57.428355142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8vdh,Uid:f9f745c4-024b-492d-9356-4d4c5f1d59a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:57.428705 kubelet[2731]: E0911 00:29:57.428661 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:29:57.428771 kubelet[2731]: E0911 00:29:57.428729 2731 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:57.428806 kubelet[2731]: E0911 00:29:57.428768 2731 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8vdh" Sep 11 00:29:57.428891 kubelet[2731]: E0911 00:29:57.428828 2731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8vdh_calico-system(f9f745c4-024b-492d-9356-4d4c5f1d59a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8vdh_calico-system(f9f745c4-024b-492d-9356-4d4c5f1d59a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6ddd8cc874b8b86cc75d003e7dbca58211d9f0b08e49e474c87056746b305af3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8vdh" podUID="f9f745c4-024b-492d-9356-4d4c5f1d59a7" Sep 11 00:29:57.429492 systemd[1]: run-netns-cni\x2d7d7de4de\x2dedb3\x2de7de\x2d738d\x2d39b3f6098879.mount: Deactivated successfully. Sep 11 00:30:02.933600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511511262.mount: Deactivated successfully. Sep 11 00:30:05.529950 containerd[1604]: time="2025-09-11T00:30:05.527736917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:05.531039 containerd[1604]: time="2025-09-11T00:30:05.530995224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:30:05.532554 containerd[1604]: time="2025-09-11T00:30:05.532171916Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:05.537806 containerd[1604]: time="2025-09-11T00:30:05.536327943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:30:05.537806 containerd[1604]: time="2025-09-11T00:30:05.536970288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.048178131s" Sep 
11 00:30:05.537806 containerd[1604]: time="2025-09-11T00:30:05.537004918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:30:05.569289 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:48824.service - OpenSSH per-connection server daemon (10.0.0.1:48824). Sep 11 00:30:05.573222 containerd[1604]: time="2025-09-11T00:30:05.573176223Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:30:05.600797 containerd[1604]: time="2025-09-11T00:30:05.600751483Z" level=info msg="Container e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:05.603045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149821908.mount: Deactivated successfully. Sep 11 00:30:05.612448 containerd[1604]: time="2025-09-11T00:30:05.612409070Z" level=info msg="CreateContainer within sandbox \"fb99dbab4dbec75436513c238222f9cff5ff0829e585f85da98e1cff3576c93e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\"" Sep 11 00:30:05.613552 containerd[1604]: time="2025-09-11T00:30:05.612953928Z" level=info msg="StartContainer for \"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\"" Sep 11 00:30:05.614410 containerd[1604]: time="2025-09-11T00:30:05.614382237Z" level=info msg="connecting to shim e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0" address="unix:///run/containerd/s/b74704b9300c4b1c227bdddfd641e7f11962fba515f5d6468b27dd3899b68459" protocol=ttrpc version=3 Sep 11 00:30:05.620555 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 48824 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:30:05.622470 
sshd-session[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:05.630827 systemd-logind[1584]: New session 8 of user core. Sep 11 00:30:05.639694 systemd[1]: Started cri-containerd-e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0.scope - libcontainer container e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0. Sep 11 00:30:05.640765 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:30:05.687396 containerd[1604]: time="2025-09-11T00:30:05.687356762Z" level=info msg="StartContainer for \"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\" returns successfully" Sep 11 00:30:05.778433 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:30:05.778574 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:30:05.782576 sshd[3841]: Connection closed by 10.0.0.1 port 48824 Sep 11 00:30:05.782876 sshd-session[3820]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:05.788307 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:48824.service: Deactivated successfully. Sep 11 00:30:05.790282 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:30:05.792188 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:30:05.793699 systemd-logind[1584]: Removed session 8. 
Sep 11 00:30:05.933296 kubelet[2731]: I0911 00:30:05.933239 2731 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/519de761-8031-4231-9a05-7b978de8549f-whisker-backend-key-pair\") pod \"519de761-8031-4231-9a05-7b978de8549f\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " Sep 11 00:30:05.934308 kubelet[2731]: I0911 00:30:05.934043 2731 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/519de761-8031-4231-9a05-7b978de8549f-whisker-ca-bundle\") pod \"519de761-8031-4231-9a05-7b978de8549f\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " Sep 11 00:30:05.934611 kubelet[2731]: I0911 00:30:05.934579 2731 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcjzz\" (UniqueName: \"kubernetes.io/projected/519de761-8031-4231-9a05-7b978de8549f-kube-api-access-bcjzz\") pod \"519de761-8031-4231-9a05-7b978de8549f\" (UID: \"519de761-8031-4231-9a05-7b978de8549f\") " Sep 11 00:30:05.934943 kubelet[2731]: I0911 00:30:05.934883 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519de761-8031-4231-9a05-7b978de8549f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "519de761-8031-4231-9a05-7b978de8549f" (UID: "519de761-8031-4231-9a05-7b978de8549f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:30:05.937859 kubelet[2731]: I0911 00:30:05.937809 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519de761-8031-4231-9a05-7b978de8549f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "519de761-8031-4231-9a05-7b978de8549f" (UID: "519de761-8031-4231-9a05-7b978de8549f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:30:05.938646 kubelet[2731]: I0911 00:30:05.938595 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519de761-8031-4231-9a05-7b978de8549f-kube-api-access-bcjzz" (OuterVolumeSpecName: "kube-api-access-bcjzz") pod "519de761-8031-4231-9a05-7b978de8549f" (UID: "519de761-8031-4231-9a05-7b978de8549f"). InnerVolumeSpecName "kube-api-access-bcjzz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:30:05.939360 systemd[1]: var-lib-kubelet-pods-519de761\x2d8031\x2d4231\x2d9a05\x2d7b978de8549f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbcjzz.mount: Deactivated successfully. Sep 11 00:30:05.939485 systemd[1]: var-lib-kubelet-pods-519de761\x2d8031\x2d4231\x2d9a05\x2d7b978de8549f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:30:06.035792 kubelet[2731]: I0911 00:30:06.035693 2731 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/519de761-8031-4231-9a05-7b978de8549f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:30:06.035792 kubelet[2731]: I0911 00:30:06.035721 2731 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/519de761-8031-4231-9a05-7b978de8549f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:30:06.035792 kubelet[2731]: I0911 00:30:06.035729 2731 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bcjzz\" (UniqueName: \"kubernetes.io/projected/519de761-8031-4231-9a05-7b978de8549f-kube-api-access-bcjzz\") on node \"localhost\" DevicePath \"\"" Sep 11 00:30:06.514953 systemd[1]: Removed slice kubepods-besteffort-pod519de761_8031_4231_9a05_7b978de8549f.slice - libcontainer container kubepods-besteffort-pod519de761_8031_4231_9a05_7b978de8549f.slice. 
Sep 11 00:30:06.527777 kubelet[2731]: I0911 00:30:06.527710 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wph4m" podStartSLOduration=1.60360983 podStartE2EDuration="23.527686369s" podCreationTimestamp="2025-09-11 00:29:43 +0000 UTC" firstStartedPulling="2025-09-11 00:29:43.616366778 +0000 UTC m=+18.333974668" lastFinishedPulling="2025-09-11 00:30:05.540443317 +0000 UTC m=+40.258051207" observedRunningTime="2025-09-11 00:30:06.526147383 +0000 UTC m=+41.243755283" watchObservedRunningTime="2025-09-11 00:30:06.527686369 +0000 UTC m=+41.245294259" Sep 11 00:30:06.587270 systemd[1]: Created slice kubepods-besteffort-podfe67cb06_2a4e_48ed_9090_2bb0c2799f24.slice - libcontainer container kubepods-besteffort-podfe67cb06_2a4e_48ed_9090_2bb0c2799f24.slice. Sep 11 00:30:06.639798 kubelet[2731]: I0911 00:30:06.639734 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe67cb06-2a4e-48ed-9090-2bb0c2799f24-whisker-backend-key-pair\") pod \"whisker-658957dc95-9xjbv\" (UID: \"fe67cb06-2a4e-48ed-9090-2bb0c2799f24\") " pod="calico-system/whisker-658957dc95-9xjbv" Sep 11 00:30:06.639798 kubelet[2731]: I0911 00:30:06.639787 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe67cb06-2a4e-48ed-9090-2bb0c2799f24-whisker-ca-bundle\") pod \"whisker-658957dc95-9xjbv\" (UID: \"fe67cb06-2a4e-48ed-9090-2bb0c2799f24\") " pod="calico-system/whisker-658957dc95-9xjbv" Sep 11 00:30:06.639798 kubelet[2731]: I0911 00:30:06.639810 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9ln\" (UniqueName: \"kubernetes.io/projected/fe67cb06-2a4e-48ed-9090-2bb0c2799f24-kube-api-access-vb9ln\") pod \"whisker-658957dc95-9xjbv\" (UID: 
\"fe67cb06-2a4e-48ed-9090-2bb0c2799f24\") " pod="calico-system/whisker-658957dc95-9xjbv" Sep 11 00:30:06.891491 containerd[1604]: time="2025-09-11T00:30:06.891352245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658957dc95-9xjbv,Uid:fe67cb06-2a4e-48ed-9090-2bb0c2799f24,Namespace:calico-system,Attempt:0,}" Sep 11 00:30:07.316145 systemd-networkd[1499]: cali9b1468a01e8: Link UP Sep 11 00:30:07.316714 systemd-networkd[1499]: cali9b1468a01e8: Gained carrier Sep 11 00:30:07.337307 containerd[1604]: 2025-09-11 00:30:06.994 [INFO][3908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:30:07.337307 containerd[1604]: 2025-09-11 00:30:07.165 [INFO][3908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--658957dc95--9xjbv-eth0 whisker-658957dc95- calico-system fe67cb06-2a4e-48ed-9090-2bb0c2799f24 925 0 2025-09-11 00:30:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:658957dc95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-658957dc95-9xjbv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9b1468a01e8 [] [] }} ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-" Sep 11 00:30:07.337307 containerd[1604]: 2025-09-11 00:30:07.165 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.337307 containerd[1604]: 2025-09-11 00:30:07.267 [INFO][4019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" HandleID="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Workload="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.267 [INFO][4019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" HandleID="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Workload="localhost-k8s-whisker--658957dc95--9xjbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-658957dc95-9xjbv", "timestamp":"2025-09-11 00:30:07.267231578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.267 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.267 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.268 [INFO][4019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.279 [INFO][4019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" host="localhost" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.284 [INFO][4019] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.289 [INFO][4019] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.291 [INFO][4019] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.293 [INFO][4019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:07.337596 containerd[1604]: 2025-09-11 00:30:07.293 [INFO][4019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" host="localhost" Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.294 [INFO][4019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4 Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.298 [INFO][4019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" host="localhost" Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.303 [INFO][4019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" host="localhost" Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.303 [INFO][4019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" host="localhost" Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.303 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:07.337869 containerd[1604]: 2025-09-11 00:30:07.303 [INFO][4019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" HandleID="k8s-pod-network.01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Workload="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.338023 containerd[1604]: 2025-09-11 00:30:07.307 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658957dc95--9xjbv-eth0", GenerateName:"whisker-658957dc95-", Namespace:"calico-system", SelfLink:"", UID:"fe67cb06-2a4e-48ed-9090-2bb0c2799f24", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658957dc95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-658957dc95-9xjbv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9b1468a01e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:07.338023 containerd[1604]: 2025-09-11 00:30:07.307 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.338116 containerd[1604]: 2025-09-11 00:30:07.307 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b1468a01e8 ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.338116 containerd[1604]: 2025-09-11 00:30:07.317 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.338163 containerd[1604]: 2025-09-11 00:30:07.320 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" 
WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--658957dc95--9xjbv-eth0", GenerateName:"whisker-658957dc95-", Namespace:"calico-system", SelfLink:"", UID:"fe67cb06-2a4e-48ed-9090-2bb0c2799f24", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658957dc95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4", Pod:"whisker-658957dc95-9xjbv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9b1468a01e8", MAC:"fe:2d:90:50:f5:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:07.338227 containerd[1604]: 2025-09-11 00:30:07.334 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" Namespace="calico-system" Pod="whisker-658957dc95-9xjbv" WorkloadEndpoint="localhost-k8s-whisker--658957dc95--9xjbv-eth0" Sep 11 00:30:07.372982 kubelet[2731]: I0911 00:30:07.372919 2731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="519de761-8031-4231-9a05-7b978de8549f" path="/var/lib/kubelet/pods/519de761-8031-4231-9a05-7b978de8549f/volumes" Sep 11 00:30:08.370902 containerd[1604]: time="2025-09-11T00:30:08.370732347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b784dfb7-75x9b,Uid:3a54724d-d025-4e6c-9af5-25bad0eafbe3,Namespace:calico-system,Attempt:0,}" Sep 11 00:30:08.372549 containerd[1604]: time="2025-09-11T00:30:08.371723745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-dq26j,Uid:2c8e5679-b12e-4f76-a56f-60c242698950,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:30:08.797269 systemd-networkd[1499]: vxlan.calico: Link UP Sep 11 00:30:08.797279 systemd-networkd[1499]: vxlan.calico: Gained carrier Sep 11 00:30:08.832251 systemd-networkd[1499]: cali9b1468a01e8: Gained IPv6LL Sep 11 00:30:08.941724 systemd-networkd[1499]: cali27d7d51cd4c: Link UP Sep 11 00:30:08.944733 systemd-networkd[1499]: cali27d7d51cd4c: Gained carrier Sep 11 00:30:08.964706 containerd[1604]: time="2025-09-11T00:30:08.964660588Z" level=info msg="connecting to shim 01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4" address="unix:///run/containerd/s/880493fe9bbdb752a6f755e40a51bb20121a8a87ec1e18d50193caed61c94e31" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:08.978106 containerd[1604]: 2025-09-11 00:30:08.811 [INFO][4081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0 calico-kube-controllers-9b784dfb7- calico-system 3a54724d-d025-4e6c-9af5-25bad0eafbe3 818 0 2025-09-11 00:29:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9b784dfb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-9b784dfb7-75x9b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali27d7d51cd4c [] [] }} ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-" Sep 11 00:30:08.978106 containerd[1604]: 2025-09-11 00:30:08.812 [INFO][4081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978106 containerd[1604]: 2025-09-11 00:30:08.862 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" HandleID="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Workload="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.862 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" HandleID="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Workload="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9b784dfb7-75x9b", "timestamp":"2025-09-11 00:30:08.862734664 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.865 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.866 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.866 [INFO][4132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.883 [INFO][4132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" host="localhost" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.900 [INFO][4132] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.907 [INFO][4132] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.911 [INFO][4132] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.913 [INFO][4132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:08.978303 containerd[1604]: 2025-09-11 00:30:08.913 [INFO][4132] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" host="localhost" Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.915 [INFO][4132] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6 Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.921 [INFO][4132] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" host="localhost" Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.928 [INFO][4132] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" host="localhost" Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.928 [INFO][4132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" host="localhost" Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.928 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:08.978552 containerd[1604]: 2025-09-11 00:30:08.928 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" HandleID="k8s-pod-network.839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Workload="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978676 containerd[1604]: 2025-09-11 00:30:08.938 [INFO][4081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0", GenerateName:"calico-kube-controllers-9b784dfb7-", Namespace:"calico-system", SelfLink:"", UID:"3a54724d-d025-4e6c-9af5-25bad0eafbe3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 43, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b784dfb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9b784dfb7-75x9b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d7d51cd4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:08.978725 containerd[1604]: 2025-09-11 00:30:08.938 [INFO][4081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978725 containerd[1604]: 2025-09-11 00:30:08.938 [INFO][4081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27d7d51cd4c ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978725 containerd[1604]: 2025-09-11 00:30:08.943 [INFO][4081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.978810 containerd[1604]: 2025-09-11 00:30:08.945 [INFO][4081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0", GenerateName:"calico-kube-controllers-9b784dfb7-", Namespace:"calico-system", SelfLink:"", UID:"3a54724d-d025-4e6c-9af5-25bad0eafbe3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b784dfb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6", Pod:"calico-kube-controllers-9b784dfb7-75x9b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali27d7d51cd4c", MAC:"52:5c:39:8e:80:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:08.978872 containerd[1604]: 2025-09-11 00:30:08.962 [INFO][4081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" Namespace="calico-system" Pod="calico-kube-controllers-9b784dfb7-75x9b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b784dfb7--75x9b-eth0" Sep 11 00:30:08.987806 systemd[1]: Started cri-containerd-01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4.scope - libcontainer container 01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4. Sep 11 00:30:08.988127 containerd[1604]: time="2025-09-11T00:30:08.988103561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\" id:\"d2bf860ac90fdd83c99bc1776f8d02198c6bb57e577d2c54ee82880d5117ae35\" pid:4160 exit_status:1 exited_at:{seconds:1757550608 nanos:987878750}" Sep 11 00:30:09.016610 containerd[1604]: time="2025-09-11T00:30:09.016554657Z" level=info msg="connecting to shim 839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6" address="unix:///run/containerd/s/f7248b700309bdc1e425f72ad86ce39d4f2237954c0fc30885842d986f31dc34" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:09.036261 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.053475 systemd-networkd[1499]: calid57ab062853: Link UP Sep 11 00:30:09.054612 systemd-networkd[1499]: calid57ab062853: Gained carrier Sep 11 00:30:09.061746 systemd[1]: Started cri-containerd-839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6.scope - libcontainer container 
839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6. Sep 11 00:30:09.074168 containerd[1604]: 2025-09-11 00:30:08.845 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0 calico-apiserver-8f8b48fb4- calico-apiserver 2c8e5679-b12e-4f76-a56f-60c242698950 819 0 2025-09-11 00:29:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f8b48fb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f8b48fb4-dq26j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid57ab062853 [] [] }} ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-" Sep 11 00:30:09.074168 containerd[1604]: 2025-09-11 00:30:08.845 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.074168 containerd[1604]: 2025-09-11 00:30:08.886 [INFO][4155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" HandleID="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:08.886 [INFO][4155] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" HandleID="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f8b48fb4-dq26j", "timestamp":"2025-09-11 00:30:08.885192502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:08.886 [INFO][4155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:08.930 [INFO][4155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:08.930 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:08.987 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" host="localhost" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:09.004 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:09.013 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:09.016 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:09.020 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.074328 containerd[1604]: 2025-09-11 00:30:09.020 [INFO][4155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" host="localhost" Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.023 [INFO][4155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.028 [INFO][4155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" host="localhost" Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.036 [INFO][4155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" host="localhost" Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.036 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" host="localhost" Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.036 [INFO][4155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:30:09.075432 containerd[1604]: 2025-09-11 00:30:09.036 [INFO][4155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" HandleID="k8s-pod-network.da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.075625 containerd[1604]: 2025-09-11 00:30:09.048 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0", GenerateName:"calico-apiserver-8f8b48fb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c8e5679-b12e-4f76-a56f-60c242698950", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f8b48fb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f8b48fb4-dq26j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid57ab062853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.075703 containerd[1604]: 2025-09-11 00:30:09.048 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.075703 containerd[1604]: 2025-09-11 00:30:09.048 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid57ab062853 ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.075703 containerd[1604]: 2025-09-11 00:30:09.055 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.075777 containerd[1604]: 2025-09-11 00:30:09.055 [INFO][4092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0", GenerateName:"calico-apiserver-8f8b48fb4-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"2c8e5679-b12e-4f76-a56f-60c242698950", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f8b48fb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c", Pod:"calico-apiserver-8f8b48fb4-dq26j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid57ab062853", MAC:"7e:f0:e4:07:7f:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.075830 containerd[1604]: 2025-09-11 00:30:09.070 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-dq26j" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--dq26j-eth0" Sep 11 00:30:09.094728 containerd[1604]: time="2025-09-11T00:30:09.093491243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658957dc95-9xjbv,Uid:fe67cb06-2a4e-48ed-9090-2bb0c2799f24,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4\"" Sep 11 00:30:09.099310 containerd[1604]: time="2025-09-11T00:30:09.099273635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:30:09.099787 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.131647 containerd[1604]: time="2025-09-11T00:30:09.131597668Z" level=info msg="connecting to shim da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c" address="unix:///run/containerd/s/3c552c0dc60c8686a18a1ec6c43333bf4756a126fee0848932e4826d0ded39cd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:09.147660 containerd[1604]: time="2025-09-11T00:30:09.147029859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b784dfb7-75x9b,Uid:3a54724d-d025-4e6c-9af5-25bad0eafbe3,Namespace:calico-system,Attempt:0,} returns sandbox id \"839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6\"" Sep 11 00:30:09.168781 systemd[1]: Started cri-containerd-da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c.scope - libcontainer container da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c. 
Sep 11 00:30:09.171474 containerd[1604]: time="2025-09-11T00:30:09.171410536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\" id:\"e636a0a056ba375b635890b03978c482665c5d062d0608322f9d1cc7da8e360d\" pid:4249 exit_status:1 exited_at:{seconds:1757550609 nanos:171004534}" Sep 11 00:30:09.186839 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.281416 containerd[1604]: time="2025-09-11T00:30:09.281371427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-dq26j,Uid:2c8e5679-b12e-4f76-a56f-60c242698950,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c\"" Sep 11 00:30:09.370454 containerd[1604]: time="2025-09-11T00:30:09.370311735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8vdh,Uid:f9f745c4-024b-492d-9356-4d4c5f1d59a7,Namespace:calico-system,Attempt:0,}" Sep 11 00:30:09.371020 containerd[1604]: time="2025-09-11T00:30:09.370820563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th2qt,Uid:b67b4d0d-825c-42d3-af2e-2c137271f3b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:30:09.371020 containerd[1604]: time="2025-09-11T00:30:09.370855803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-276b9,Uid:723835f4-9195-4d03-9f2f-7e23eaa56a2c,Namespace:calico-system,Attempt:0,}" Sep 11 00:30:09.479708 systemd-networkd[1499]: calie4e153928fa: Link UP Sep 11 00:30:09.480145 systemd-networkd[1499]: calie4e153928fa: Gained carrier Sep 11 00:30:09.491104 containerd[1604]: 2025-09-11 00:30:09.413 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--th2qt-eth0 coredns-674b8bbfcf- kube-system 
b67b4d0d-825c-42d3-af2e-2c137271f3b5 809 0 2025-09-11 00:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-th2qt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4e153928fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-" Sep 11 00:30:09.491104 containerd[1604]: 2025-09-11 00:30:09.413 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491104 containerd[1604]: 2025-09-11 00:30:09.444 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" HandleID="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Workload="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.444 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" HandleID="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Workload="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-th2qt", "timestamp":"2025-09-11 00:30:09.444547376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.444 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.444 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.444 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.451 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" host="localhost" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.457 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.460 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.462 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.464 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.491307 containerd[1604]: 2025-09-11 00:30:09.464 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" host="localhost" Sep 11 00:30:09.491615 containerd[1604]: 2025-09-11 00:30:09.465 [INFO][4432] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db Sep 11 00:30:09.491615 
containerd[1604]: 2025-09-11 00:30:09.469 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" host="localhost" Sep 11 00:30:09.491615 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" host="localhost" Sep 11 00:30:09.491615 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" host="localhost" Sep 11 00:30:09.491615 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:09.491615 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" HandleID="k8s-pod-network.2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Workload="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491770 containerd[1604]: 2025-09-11 00:30:09.477 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--th2qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b67b4d0d-825c-42d3-af2e-2c137271f3b5", ResourceVersion:"809", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-th2qt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e153928fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.491861 containerd[1604]: 2025-09-11 00:30:09.477 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491861 containerd[1604]: 2025-09-11 00:30:09.477 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e153928fa 
ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491861 containerd[1604]: 2025-09-11 00:30:09.480 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.491954 containerd[1604]: 2025-09-11 00:30:09.480 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--th2qt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b67b4d0d-825c-42d3-af2e-2c137271f3b5", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db", Pod:"coredns-674b8bbfcf-th2qt", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e153928fa", MAC:"fe:fb:ea:3f:b9:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.491954 containerd[1604]: 2025-09-11 00:30:09.488 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" Namespace="kube-system" Pod="coredns-674b8bbfcf-th2qt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--th2qt-eth0" Sep 11 00:30:09.524284 containerd[1604]: time="2025-09-11T00:30:09.524239367Z" level=info msg="connecting to shim 2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db" address="unix:///run/containerd/s/185bf7ddf7b4be8cd16068606c558d165a73ff06350815b722e841cb74be66b5" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:09.545713 systemd[1]: Started cri-containerd-2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db.scope - libcontainer container 2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db. 
Sep 11 00:30:09.558448 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.593500 systemd-networkd[1499]: cali475dd401e01: Link UP Sep 11 00:30:09.596236 systemd-networkd[1499]: cali475dd401e01: Gained carrier Sep 11 00:30:09.600149 containerd[1604]: time="2025-09-11T00:30:09.600106574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th2qt,Uid:b67b4d0d-825c-42d3-af2e-2c137271f3b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db\"" Sep 11 00:30:09.608285 containerd[1604]: time="2025-09-11T00:30:09.608248366Z" level=info msg="CreateContainer within sandbox \"2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.415 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n8vdh-eth0 csi-node-driver- calico-system f9f745c4-024b-492d-9356-4d4c5f1d59a7 698 0 2025-09-11 00:29:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n8vdh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali475dd401e01 [] [] }} ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.415 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.445 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" HandleID="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Workload="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.445 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" HandleID="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Workload="localhost-k8s-csi--node--driver--n8vdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cf00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n8vdh", "timestamp":"2025-09-11 00:30:09.44557817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.445 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.474 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.553 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.558 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.564 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.565 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.567 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.567 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.568 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657 Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.571 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" host="localhost" Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:09.610704 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" HandleID="k8s-pod-network.1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Workload="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.588 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n8vdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9f745c4-024b-492d-9356-4d4c5f1d59a7", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n8vdh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali475dd401e01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.589 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.589 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali475dd401e01 ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.597 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.598 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" 
Namespace="calico-system" Pod="csi-node-driver-n8vdh" WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n8vdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f9f745c4-024b-492d-9356-4d4c5f1d59a7", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657", Pod:"csi-node-driver-n8vdh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali475dd401e01", MAC:"02:81:6e:92:0b:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.612631 containerd[1604]: 2025-09-11 00:30:09.608 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" Namespace="calico-system" Pod="csi-node-driver-n8vdh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--n8vdh-eth0" Sep 11 00:30:09.624051 containerd[1604]: time="2025-09-11T00:30:09.623968424Z" level=info msg="Container 5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:09.632102 containerd[1604]: time="2025-09-11T00:30:09.632074384Z" level=info msg="CreateContainer within sandbox \"2bcbbbdae62c85c9f51538c2af1a83fa8d632677ee81bb6577050db98b4b10db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8\"" Sep 11 00:30:09.632583 containerd[1604]: time="2025-09-11T00:30:09.632516389Z" level=info msg="StartContainer for \"5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8\"" Sep 11 00:30:09.633413 containerd[1604]: time="2025-09-11T00:30:09.633385949Z" level=info msg="connecting to shim 5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8" address="unix:///run/containerd/s/185bf7ddf7b4be8cd16068606c558d165a73ff06350815b722e841cb74be66b5" protocol=ttrpc version=3 Sep 11 00:30:09.639185 containerd[1604]: time="2025-09-11T00:30:09.639143701Z" level=info msg="connecting to shim 1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657" address="unix:///run/containerd/s/138d738f216b68d8723984e7adf298e3040acf1fa9d52d65c76de436a8c21570" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:09.655681 systemd[1]: Started cri-containerd-5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8.scope - libcontainer container 5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8. Sep 11 00:30:09.659495 systemd[1]: Started cri-containerd-1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657.scope - libcontainer container 1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657. 
Sep 11 00:30:09.676312 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.693239 systemd-networkd[1499]: cali59188b69815: Link UP Sep 11 00:30:09.694008 systemd-networkd[1499]: cali59188b69815: Gained carrier Sep 11 00:30:09.706337 containerd[1604]: time="2025-09-11T00:30:09.706303684Z" level=info msg="StartContainer for \"5a18a25c369d3c4418bc1ae524befd832a8d5205e72c7e84ee335c19aafb51c8\" returns successfully" Sep 11 00:30:09.706452 containerd[1604]: time="2025-09-11T00:30:09.706371238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8vdh,Uid:f9f745c4-024b-492d-9356-4d4c5f1d59a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657\"" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.417 [INFO][4392] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--276b9-eth0 goldmane-54d579b49d- calico-system 723835f4-9195-4d03-9f2f-7e23eaa56a2c 821 0 2025-09-11 00:29:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-276b9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali59188b69815 [] [] }} ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.417 [INFO][4392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.446 [INFO][4444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" HandleID="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Workload="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.446 [INFO][4444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" HandleID="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Workload="localhost-k8s-goldmane--54d579b49d--276b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-276b9", "timestamp":"2025-09-11 00:30:09.44661258 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.446 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.578 [INFO][4444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.654 [INFO][4444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.660 [INFO][4444] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.667 [INFO][4444] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.669 [INFO][4444] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.672 [INFO][4444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.672 [INFO][4444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.673 [INFO][4444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.678 [INFO][4444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.684 [INFO][4444] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.684 [INFO][4444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" host="localhost" Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.684 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:09.714598 containerd[1604]: 2025-09-11 00:30:09.684 [INFO][4444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" HandleID="k8s-pod-network.00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Workload="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.688 [INFO][4392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--276b9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"723835f4-9195-4d03-9f2f-7e23eaa56a2c", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-276b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59188b69815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.688 [INFO][4392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.688 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59188b69815 ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.694 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.694 [INFO][4392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--276b9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"723835f4-9195-4d03-9f2f-7e23eaa56a2c", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e", Pod:"goldmane-54d579b49d-276b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali59188b69815", MAC:"92:00:d9:0a:40:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:09.715293 containerd[1604]: 2025-09-11 00:30:09.705 [INFO][4392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" Namespace="calico-system" Pod="goldmane-54d579b49d-276b9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--276b9-eth0" Sep 11 00:30:09.755785 containerd[1604]: time="2025-09-11T00:30:09.755664689Z" level=info msg="connecting to shim 
00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e" address="unix:///run/containerd/s/7c8419bfb21e458787407ba11a50df33b498e348e429ebaee1b2339e053be11c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:09.795837 systemd[1]: Started cri-containerd-00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e.scope - libcontainer container 00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e. Sep 11 00:30:09.818419 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:09.872358 containerd[1604]: time="2025-09-11T00:30:09.872317010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-276b9,Uid:723835f4-9195-4d03-9f2f-7e23eaa56a2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e\"" Sep 11 00:30:10.171770 systemd-networkd[1499]: calid57ab062853: Gained IPv6LL Sep 11 00:30:10.235708 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Sep 11 00:30:10.370372 containerd[1604]: time="2025-09-11T00:30:10.370325111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-vh4qx,Uid:1e02c891-beca-4653-95fe-399613c2df8b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:30:10.370497 containerd[1604]: time="2025-09-11T00:30:10.370434189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8znxw,Uid:9830b188-c7bf-4ce0-9254-c8625fed4c6b,Namespace:kube-system,Attempt:0,}" Sep 11 00:30:10.427705 systemd-networkd[1499]: cali27d7d51cd4c: Gained IPv6LL Sep 11 00:30:10.471872 systemd-networkd[1499]: cali53d2e211dd4: Link UP Sep 11 00:30:10.472525 systemd-networkd[1499]: cali53d2e211dd4: Gained carrier Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.402 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0 calico-apiserver-8f8b48fb4- calico-apiserver 1e02c891-beca-4653-95fe-399613c2df8b 820 0 2025-09-11 00:29:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f8b48fb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f8b48fb4-vh4qx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53d2e211dd4 [] [] }} ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.402 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.436 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" HandleID="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.436 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" HandleID="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa30), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f8b48fb4-vh4qx", "timestamp":"2025-09-11 00:30:10.43668437 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.436 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.437 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.437 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.443 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.446 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.450 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.451 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.453 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.453 [INFO][4685] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" host="localhost" Sep 11 
00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.456 [INFO][4685] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2 Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.460 [INFO][4685] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.465 [INFO][4685] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.465 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" host="localhost" Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.466 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:30:10.483955 containerd[1604]: 2025-09-11 00:30:10.466 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" HandleID="k8s-pod-network.475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Workload="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.468 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0", GenerateName:"calico-apiserver-8f8b48fb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e02c891-beca-4653-95fe-399613c2df8b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f8b48fb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f8b48fb4-vh4qx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53d2e211dd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.469 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.469 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53d2e211dd4 ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.472 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.473 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0", GenerateName:"calico-apiserver-8f8b48fb4-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"1e02c891-beca-4653-95fe-399613c2df8b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f8b48fb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2", Pod:"calico-apiserver-8f8b48fb4-vh4qx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53d2e211dd4", MAC:"1e:aa:ec:97:15:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:10.484761 containerd[1604]: 2025-09-11 00:30:10.481 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" Namespace="calico-apiserver" Pod="calico-apiserver-8f8b48fb4-vh4qx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f8b48fb4--vh4qx-eth0" Sep 11 00:30:10.505599 containerd[1604]: time="2025-09-11T00:30:10.505129776Z" level=info msg="connecting to shim 475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2" address="unix:///run/containerd/s/24b630f830ea6036c5f390ffbc3fcc422ccd3392cdea2eb6d20797839358b146" namespace=k8s.io protocol=ttrpc 
version=3 Sep 11 00:30:10.531783 systemd[1]: Started cri-containerd-475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2.scope - libcontainer container 475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2. Sep 11 00:30:10.545560 kubelet[2731]: I0911 00:30:10.545275 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-th2qt" podStartSLOduration=39.545258052 podStartE2EDuration="39.545258052s" podCreationTimestamp="2025-09-11 00:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:30:10.529910681 +0000 UTC m=+45.247518561" watchObservedRunningTime="2025-09-11 00:30:10.545258052 +0000 UTC m=+45.262865942" Sep 11 00:30:10.555767 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:10.591633 containerd[1604]: time="2025-09-11T00:30:10.591593117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f8b48fb4-vh4qx,Uid:1e02c891-beca-4653-95fe-399613c2df8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2\"" Sep 11 00:30:10.595079 systemd-networkd[1499]: cali07328576e31: Link UP Sep 11 00:30:10.596320 systemd-networkd[1499]: cali07328576e31: Gained carrier Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.410 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8znxw-eth0 coredns-674b8bbfcf- kube-system 9830b188-c7bf-4ce0-9254-c8625fed4c6b 817 0 2025-09-11 00:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8znxw eth0 coredns 
[] [] [kns.kube-system ksa.kube-system.coredns] cali07328576e31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.410 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.442 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" HandleID="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Workload="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.442 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" HandleID="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Workload="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8znxw", "timestamp":"2025-09-11 00:30:10.442204678 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.442 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.466 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.466 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.547 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.562 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.567 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.569 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.571 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.571 [INFO][4692] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.572 [INFO][4692] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8 Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.577 [INFO][4692] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.585 [INFO][4692] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.585 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" host="localhost" Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.586 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:30:10.610945 containerd[1604]: 2025-09-11 00:30:10.586 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" HandleID="k8s-pod-network.bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Workload="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.590 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8znxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9830b188-c7bf-4ce0-9254-c8625fed4c6b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8znxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07328576e31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.590 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.590 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07328576e31 ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.597 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.597 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8znxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9830b188-c7bf-4ce0-9254-c8625fed4c6b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8", Pod:"coredns-674b8bbfcf-8znxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07328576e31", MAC:"02:fc:d2:4b:13:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:30:10.611488 containerd[1604]: 2025-09-11 00:30:10.607 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" Namespace="kube-system" Pod="coredns-674b8bbfcf-8znxw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8znxw-eth0" Sep 11 00:30:10.633411 containerd[1604]: time="2025-09-11T00:30:10.633352803Z" level=info msg="connecting to shim bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8" address="unix:///run/containerd/s/d4d1e53c90bb9cb14a02ef9feee81bd87b49d1d460090874b45e3bff0ec385d9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:30:10.657673 systemd[1]: Started cri-containerd-bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8.scope - libcontainer container bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8. 
Sep 11 00:30:10.670983 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:30:10.702242 containerd[1604]: time="2025-09-11T00:30:10.702156104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8znxw,Uid:9830b188-c7bf-4ce0-9254-c8625fed4c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8\"" Sep 11 00:30:10.710374 containerd[1604]: time="2025-09-11T00:30:10.710347147Z" level=info msg="CreateContainer within sandbox \"bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:30:10.720101 containerd[1604]: time="2025-09-11T00:30:10.720062256Z" level=info msg="Container fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:30:10.726271 containerd[1604]: time="2025-09-11T00:30:10.726249526Z" level=info msg="CreateContainer within sandbox \"bd0684554f7f56c12eec4b80b5eec3396d0214d9e72b8e0ca7edc64fce2234b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd\"" Sep 11 00:30:10.726861 containerd[1604]: time="2025-09-11T00:30:10.726835327Z" level=info msg="StartContainer for \"fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd\"" Sep 11 00:30:10.727808 containerd[1604]: time="2025-09-11T00:30:10.727783872Z" level=info msg="connecting to shim fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd" address="unix:///run/containerd/s/d4d1e53c90bb9cb14a02ef9feee81bd87b49d1d460090874b45e3bff0ec385d9" protocol=ttrpc version=3 Sep 11 00:30:10.745648 systemd[1]: Started cri-containerd-fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd.scope - libcontainer container fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd. 
Sep 11 00:30:10.776658 containerd[1604]: time="2025-09-11T00:30:10.776622429Z" level=info msg="StartContainer for \"fa461c5231b662d52f24e39f049a7d9bfb2041c9800127002e66144220efdbbd\" returns successfully" Sep 11 00:30:10.796800 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:55720.service - OpenSSH per-connection server daemon (10.0.0.1:55720). Sep 11 00:30:10.850101 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 55720 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4 Sep 11 00:30:10.851734 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:30:10.856018 systemd-logind[1584]: New session 9 of user core. Sep 11 00:30:10.862682 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:30:10.939694 systemd-networkd[1499]: calie4e153928fa: Gained IPv6LL Sep 11 00:30:10.986169 sshd[4855]: Connection closed by 10.0.0.1 port 55720 Sep 11 00:30:10.987766 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Sep 11 00:30:10.992072 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:55720.service: Deactivated successfully. Sep 11 00:30:10.994216 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:30:10.995190 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:30:10.996426 systemd-logind[1584]: Removed session 9. 
Sep 11 00:30:11.067669 systemd-networkd[1499]: cali475dd401e01: Gained IPv6LL
Sep 11 00:30:11.302403 containerd[1604]: time="2025-09-11T00:30:11.302302107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:11.303146 containerd[1604]: time="2025-09-11T00:30:11.303094707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 11 00:30:11.304301 containerd[1604]: time="2025-09-11T00:30:11.304276925Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:11.306886 containerd[1604]: time="2025-09-11T00:30:11.306847402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:11.308563 containerd[1604]: time="2025-09-11T00:30:11.308486061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.209173488s"
Sep 11 00:30:11.308563 containerd[1604]: time="2025-09-11T00:30:11.308525580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 11 00:30:11.309746 containerd[1604]: time="2025-09-11T00:30:11.309709220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 11 00:30:11.332382 containerd[1604]: time="2025-09-11T00:30:11.332335879Z" level=info msg="CreateContainer within sandbox \"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 11 00:30:11.387325 containerd[1604]: time="2025-09-11T00:30:11.387278773Z" level=info msg="Container fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:11.397169 containerd[1604]: time="2025-09-11T00:30:11.397126354Z" level=info msg="CreateContainer within sandbox \"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562\""
Sep 11 00:30:11.397678 containerd[1604]: time="2025-09-11T00:30:11.397647423Z" level=info msg="StartContainer for \"fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562\""
Sep 11 00:30:11.398809 containerd[1604]: time="2025-09-11T00:30:11.398761906Z" level=info msg="connecting to shim fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562" address="unix:///run/containerd/s/880493fe9bbdb752a6f755e40a51bb20121a8a87ec1e18d50193caed61c94e31" protocol=ttrpc version=3
Sep 11 00:30:11.423739 systemd[1]: Started cri-containerd-fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562.scope - libcontainer container fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562.
Sep 11 00:30:11.470959 containerd[1604]: time="2025-09-11T00:30:11.470916739Z" level=info msg="StartContainer for \"fd1ca9606d62d62ec7c3f9939219d0bc01bd54381af1609c9070322b13f78562\" returns successfully"
Sep 11 00:30:11.536556 kubelet[2731]: I0911 00:30:11.535961 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8znxw" podStartSLOduration=40.535943435 podStartE2EDuration="40.535943435s" podCreationTimestamp="2025-09-11 00:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:30:11.535864388 +0000 UTC m=+46.253472278" watchObservedRunningTime="2025-09-11 00:30:11.535943435 +0000 UTC m=+46.253551325"
Sep 11 00:30:11.581247 systemd-networkd[1499]: cali59188b69815: Gained IPv6LL
Sep 11 00:30:11.643940 systemd-networkd[1499]: cali53d2e211dd4: Gained IPv6LL
Sep 11 00:30:12.347774 systemd-networkd[1499]: cali07328576e31: Gained IPv6LL
Sep 11 00:30:14.793819 containerd[1604]: time="2025-09-11T00:30:14.793761873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:14.794785 containerd[1604]: time="2025-09-11T00:30:14.794761117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 11 00:30:14.796480 containerd[1604]: time="2025-09-11T00:30:14.796446252Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:14.798735 containerd[1604]: time="2025-09-11T00:30:14.798708934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:14.799319 containerd[1604]: time="2025-09-11T00:30:14.799267173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.489523865s"
Sep 11 00:30:14.799364 containerd[1604]: time="2025-09-11T00:30:14.799316531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 11 00:30:14.800140 containerd[1604]: time="2025-09-11T00:30:14.800110206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:30:14.810459 containerd[1604]: time="2025-09-11T00:30:14.810421398Z" level=info msg="CreateContainer within sandbox \"839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 00:30:14.820805 containerd[1604]: time="2025-09-11T00:30:14.820756257Z" level=info msg="Container ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:14.830238 containerd[1604]: time="2025-09-11T00:30:14.830187141Z" level=info msg="CreateContainer within sandbox \"839e5969ae42f881ff4489e6c70ea0a0bb7a25737669411392a14616f2d14cb6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\""
Sep 11 00:30:14.830880 containerd[1604]: time="2025-09-11T00:30:14.830838033Z" level=info msg="StartContainer for \"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\""
Sep 11 00:30:14.832148 containerd[1604]: time="2025-09-11T00:30:14.832121942Z" level=info msg="connecting to shim ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94" address="unix:///run/containerd/s/f7248b700309bdc1e425f72ad86ce39d4f2237954c0fc30885842d986f31dc34" protocol=ttrpc version=3
Sep 11 00:30:14.858746 systemd[1]: Started cri-containerd-ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94.scope - libcontainer container ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94.
Sep 11 00:30:15.047031 containerd[1604]: time="2025-09-11T00:30:15.046928663Z" level=info msg="StartContainer for \"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\" returns successfully"
Sep 11 00:30:15.582033 containerd[1604]: time="2025-09-11T00:30:15.581991700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\" id:\"1ffaae1ed9bdf13af951d408cfbf5b9a18aebc18d4870a6e094fd455b3a63dba\" pid:4979 exited_at:{seconds:1757550615 nanos:581735683}"
Sep 11 00:30:15.707729 kubelet[2731]: I0911 00:30:15.707673 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9b784dfb7-75x9b" podStartSLOduration=27.055883843 podStartE2EDuration="32.707659651s" podCreationTimestamp="2025-09-11 00:29:43 +0000 UTC" firstStartedPulling="2025-09-11 00:30:09.148246524 +0000 UTC m=+43.865854414" lastFinishedPulling="2025-09-11 00:30:14.800022332 +0000 UTC m=+49.517630222" observedRunningTime="2025-09-11 00:30:15.548876989 +0000 UTC m=+50.266484879" watchObservedRunningTime="2025-09-11 00:30:15.707659651 +0000 UTC m=+50.425267541"
Sep 11 00:30:16.004949 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734).
Sep 11 00:30:16.059438 sshd[4992]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:16.061283 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:16.066599 systemd-logind[1584]: New session 10 of user core.
Sep 11 00:30:16.077888 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:30:16.405138 sshd[4994]: Connection closed by 10.0.0.1 port 55734
Sep 11 00:30:16.405720 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:16.412584 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:55734.service: Deactivated successfully.
Sep 11 00:30:16.415777 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:30:16.416928 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:30:16.419262 systemd-logind[1584]: Removed session 10.
Sep 11 00:30:19.953453 containerd[1604]: time="2025-09-11T00:30:19.953394414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:19.954601 containerd[1604]: time="2025-09-11T00:30:19.954558790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 11 00:30:19.956695 containerd[1604]: time="2025-09-11T00:30:19.955809186Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:19.957861 containerd[1604]: time="2025-09-11T00:30:19.957797036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:19.958469 containerd[1604]: time="2025-09-11T00:30:19.958427509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.158282575s"
Sep 11 00:30:19.958469 containerd[1604]: time="2025-09-11T00:30:19.958455945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:30:19.959441 containerd[1604]: time="2025-09-11T00:30:19.959398344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 11 00:30:19.967366 containerd[1604]: time="2025-09-11T00:30:19.967333922Z" level=info msg="CreateContainer within sandbox \"da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:30:19.978363 containerd[1604]: time="2025-09-11T00:30:19.977004914Z" level=info msg="Container 9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:19.988408 containerd[1604]: time="2025-09-11T00:30:19.988367010Z" level=info msg="CreateContainer within sandbox \"da6e2e69a109ad0b6721ad62eecb3bf02862f1b515771a8185695902994f912c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50\""
Sep 11 00:30:19.988902 containerd[1604]: time="2025-09-11T00:30:19.988854551Z" level=info msg="StartContainer for \"9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50\""
Sep 11 00:30:19.990149 containerd[1604]: time="2025-09-11T00:30:19.990123232Z" level=info msg="connecting to shim 9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50" address="unix:///run/containerd/s/3c552c0dc60c8686a18a1ec6c43333bf4756a126fee0848932e4826d0ded39cd" protocol=ttrpc version=3
Sep 11 00:30:20.037719 systemd[1]: Started cri-containerd-9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50.scope - libcontainer container 9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50.
Sep 11 00:30:20.084393 containerd[1604]: time="2025-09-11T00:30:20.084351658Z" level=info msg="StartContainer for \"9623d620c927b4636a36d06929ab136c6afc20fbb7360ab5af0650d459733f50\" returns successfully"
Sep 11 00:30:21.419148 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:57598.service - OpenSSH per-connection server daemon (10.0.0.1:57598).
Sep 11 00:30:21.460567 kubelet[2731]: I0911 00:30:21.460265 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f8b48fb4-dq26j" podStartSLOduration=30.78352457 podStartE2EDuration="41.460249843s" podCreationTimestamp="2025-09-11 00:29:40 +0000 UTC" firstStartedPulling="2025-09-11 00:30:09.282591079 +0000 UTC m=+44.000198969" lastFinishedPulling="2025-09-11 00:30:19.959316352 +0000 UTC m=+54.676924242" observedRunningTime="2025-09-11 00:30:21.459471523 +0000 UTC m=+56.177079413" watchObservedRunningTime="2025-09-11 00:30:21.460249843 +0000 UTC m=+56.177857734"
Sep 11 00:30:21.507076 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 57598 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:21.509299 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:21.515305 systemd-logind[1584]: New session 11 of user core.
Sep 11 00:30:21.522786 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:30:21.660082 sshd[5066]: Connection closed by 10.0.0.1 port 57598
Sep 11 00:30:21.660421 sshd-session[5062]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:21.674938 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:57598.service: Deactivated successfully.
Sep 11 00:30:21.677526 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:30:21.678387 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:30:21.682059 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:57610.service - OpenSSH per-connection server daemon (10.0.0.1:57610).
Sep 11 00:30:21.683145 systemd-logind[1584]: Removed session 11.
Sep 11 00:30:21.734011 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 57610 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:21.735328 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:21.740016 systemd-logind[1584]: New session 12 of user core.
Sep 11 00:30:21.755752 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:30:21.790358 kubelet[2731]: I0911 00:30:21.790317 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:30:21.894043 sshd[5084]: Connection closed by 10.0.0.1 port 57610
Sep 11 00:30:21.894788 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:21.907406 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:57610.service: Deactivated successfully.
Sep 11 00:30:21.912469 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:30:21.914619 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:30:21.917419 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:57612.service - OpenSSH per-connection server daemon (10.0.0.1:57612).
Sep 11 00:30:21.919099 systemd-logind[1584]: Removed session 12.
Sep 11 00:30:21.963094 sshd[5096]: Accepted publickey for core from 10.0.0.1 port 57612 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:21.964633 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:21.969274 systemd-logind[1584]: New session 13 of user core.
Sep 11 00:30:21.977685 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:30:22.087153 sshd[5098]: Connection closed by 10.0.0.1 port 57612
Sep 11 00:30:22.087448 sshd-session[5096]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:22.090326 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:57612.service: Deactivated successfully.
Sep 11 00:30:22.092328 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:30:22.093854 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:30:22.095684 systemd-logind[1584]: Removed session 13.
Sep 11 00:30:22.431093 containerd[1604]: time="2025-09-11T00:30:22.431032246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:22.431998 containerd[1604]: time="2025-09-11T00:30:22.431969928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 11 00:30:22.433356 containerd[1604]: time="2025-09-11T00:30:22.433324159Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:22.435203 containerd[1604]: time="2025-09-11T00:30:22.435177201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:22.435780 containerd[1604]: time="2025-09-11T00:30:22.435752932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.476324268s"
Sep 11 00:30:22.435843 containerd[1604]: time="2025-09-11T00:30:22.435783452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 11 00:30:22.436565 containerd[1604]: time="2025-09-11T00:30:22.436524989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 11 00:30:22.443792 containerd[1604]: time="2025-09-11T00:30:22.443744464Z" level=info msg="CreateContainer within sandbox \"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 11 00:30:22.459043 containerd[1604]: time="2025-09-11T00:30:22.458991534Z" level=info msg="Container b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:22.484661 containerd[1604]: time="2025-09-11T00:30:22.484620562Z" level=info msg="CreateContainer within sandbox \"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f\""
Sep 11 00:30:22.485183 containerd[1604]: time="2025-09-11T00:30:22.485155273Z" level=info msg="StartContainer for \"b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f\""
Sep 11 00:30:22.486803 containerd[1604]: time="2025-09-11T00:30:22.486775105Z" level=info msg="connecting to shim b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f" address="unix:///run/containerd/s/138d738f216b68d8723984e7adf298e3040acf1fa9d52d65c76de436a8c21570" protocol=ttrpc version=3
Sep 11 00:30:22.516759 systemd[1]: Started cri-containerd-b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f.scope - libcontainer container b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f.
Sep 11 00:30:22.628271 containerd[1604]: time="2025-09-11T00:30:22.628222626Z" level=info msg="StartContainer for \"b774b7a255fcb811c3b7f45595593f96017ad3e02a3bf1aada7d5b8a0da2fa5f\" returns successfully"
Sep 11 00:30:26.171835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580943297.mount: Deactivated successfully.
Sep 11 00:30:27.084097 containerd[1604]: time="2025-09-11T00:30:27.084053885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:27.085121 containerd[1604]: time="2025-09-11T00:30:27.085085051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 11 00:30:27.086490 containerd[1604]: time="2025-09-11T00:30:27.086431563Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:27.089053 containerd[1604]: time="2025-09-11T00:30:27.089010474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:27.089747 containerd[1604]: time="2025-09-11T00:30:27.089702766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.65313309s"
Sep 11 00:30:27.089799 containerd[1604]: time="2025-09-11T00:30:27.089748566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 11 00:30:27.090961 containerd[1604]: time="2025-09-11T00:30:27.090907561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:30:27.097185 containerd[1604]: time="2025-09-11T00:30:27.097119723Z" level=info msg="CreateContainer within sandbox \"00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 00:30:27.104094 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624).
Sep 11 00:30:27.106319 containerd[1604]: time="2025-09-11T00:30:27.106090847Z" level=info msg="Container e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:27.116756 containerd[1604]: time="2025-09-11T00:30:27.116695774Z" level=info msg="CreateContainer within sandbox \"00b16ddf1ab89e1e019ba0a56c2656033973a91a27b06b8c21b0459ad3b30e7e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\""
Sep 11 00:30:27.117241 containerd[1604]: time="2025-09-11T00:30:27.117195411Z" level=info msg="StartContainer for \"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\""
Sep 11 00:30:27.135020 containerd[1604]: time="2025-09-11T00:30:27.134968790Z" level=info msg="connecting to shim e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953" address="unix:///run/containerd/s/7c8419bfb21e458787407ba11a50df33b498e348e429ebaee1b2339e053be11c" protocol=ttrpc version=3
Sep 11 00:30:27.172751 systemd[1]: Started cri-containerd-e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953.scope - libcontainer container e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953.
Sep 11 00:30:27.184135 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:27.186024 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:27.193560 systemd-logind[1584]: New session 14 of user core.
Sep 11 00:30:27.198690 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:30:27.343731 containerd[1604]: time="2025-09-11T00:30:27.343628647Z" level=info msg="StartContainer for \"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\" returns successfully"
Sep 11 00:30:27.413594 sshd[5183]: Connection closed by 10.0.0.1 port 57624
Sep 11 00:30:27.413959 sshd-session[5160]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:27.418756 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:57624.service: Deactivated successfully.
Sep 11 00:30:27.421002 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:30:27.421911 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:30:27.423625 systemd-logind[1584]: Removed session 14.
Sep 11 00:30:27.536370 containerd[1604]: time="2025-09-11T00:30:27.536315928Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:27.537263 containerd[1604]: time="2025-09-11T00:30:27.537192641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 11 00:30:27.539090 containerd[1604]: time="2025-09-11T00:30:27.539048678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 448.110007ms"
Sep 11 00:30:27.539090 containerd[1604]: time="2025-09-11T00:30:27.539076953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:30:27.540366 containerd[1604]: time="2025-09-11T00:30:27.539905934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 11 00:30:27.544521 containerd[1604]: time="2025-09-11T00:30:27.544481397Z" level=info msg="CreateContainer within sandbox \"475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:30:27.553498 containerd[1604]: time="2025-09-11T00:30:27.553455517Z" level=info msg="Container 18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:27.561743 containerd[1604]: time="2025-09-11T00:30:27.561706954Z" level=info msg="CreateContainer within sandbox \"475e930d1c6776d33e154f3f017e4053a9e4b07ad05220a83ff089b4dc1e9bb2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b\""
Sep 11 00:30:27.562477 containerd[1604]: time="2025-09-11T00:30:27.562447100Z" level=info msg="StartContainer for \"18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b\""
Sep 11 00:30:27.563634 containerd[1604]: time="2025-09-11T00:30:27.563608240Z" level=info msg="connecting to shim 18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b" address="unix:///run/containerd/s/24b630f830ea6036c5f390ffbc3fcc422ccd3392cdea2eb6d20797839358b146" protocol=ttrpc version=3
Sep 11 00:30:27.588723 systemd[1]: Started cri-containerd-18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b.scope - libcontainer container 18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b.
Sep 11 00:30:28.050079 containerd[1604]: time="2025-09-11T00:30:28.050022299Z" level=info msg="StartContainer for \"18b70c3bdb36564ba7cd409add6501c17e5a257badd31896e7a46b5ddc534f3b\" returns successfully"
Sep 11 00:30:28.161896 containerd[1604]: time="2025-09-11T00:30:28.161858914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\" id:\"5302a94bc7a7e3d79177bef3315df93c977ec7922eb97a0ab6c4b9d7f1f0ee2d\" pid:5258 exit_status:1 exited_at:{seconds:1757550628 nanos:161337055}"
Sep 11 00:30:28.516466 kubelet[2731]: I0911 00:30:28.516304 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-276b9" podStartSLOduration=29.299788951 podStartE2EDuration="46.51628822s" podCreationTimestamp="2025-09-11 00:29:42 +0000 UTC" firstStartedPulling="2025-09-11 00:30:09.873903655 +0000 UTC m=+44.591511545" lastFinishedPulling="2025-09-11 00:30:27.090402934 +0000 UTC m=+61.808010814" observedRunningTime="2025-09-11 00:30:28.515896606 +0000 UTC m=+63.233504496" watchObservedRunningTime="2025-09-11 00:30:28.51628822 +0000 UTC m=+63.233896110"
Sep 11 00:30:29.074993 kubelet[2731]: I0911 00:30:29.074772 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f8b48fb4-vh4qx" podStartSLOduration=32.128233197 podStartE2EDuration="49.074752024s" podCreationTimestamp="2025-09-11 00:29:40 +0000 UTC" firstStartedPulling="2025-09-11 00:30:10.593276761 +0000 UTC m=+45.310884641" lastFinishedPulling="2025-09-11 00:30:27.539795578 +0000 UTC m=+62.257403468" observedRunningTime="2025-09-11 00:30:29.07473428 +0000 UTC m=+63.792342190" watchObservedRunningTime="2025-09-11 00:30:29.074752024 +0000 UTC m=+63.792359914"
Sep 11 00:30:29.153923 containerd[1604]: time="2025-09-11T00:30:29.153881611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\" id:\"aa84a95be765695a82fb8cc606b690e03c60a2d85616a2108a82cd7598636652\" pid:5286 exit_status:1 exited_at:{seconds:1757550629 nanos:153323231}"
Sep 11 00:30:31.544366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835602647.mount: Deactivated successfully.
Sep 11 00:30:32.438962 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736).
Sep 11 00:30:32.493735 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:32.495203 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:32.499491 systemd-logind[1584]: New session 15 of user core.
Sep 11 00:30:32.512655 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:30:32.655479 sshd[5317]: Connection closed by 10.0.0.1 port 54736
Sep 11 00:30:32.655740 sshd-session[5315]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:32.659455 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:54736.service: Deactivated successfully.
Sep 11 00:30:32.662688 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:30:32.663787 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:30:32.666442 systemd-logind[1584]: Removed session 15.
Sep 11 00:30:32.802684 containerd[1604]: time="2025-09-11T00:30:32.802530397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:32.803851 containerd[1604]: time="2025-09-11T00:30:32.803772263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 11 00:30:32.805630 containerd[1604]: time="2025-09-11T00:30:32.805593867Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:32.808286 containerd[1604]: time="2025-09-11T00:30:32.808247850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:32.808756 containerd[1604]: time="2025-09-11T00:30:32.808712304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.268781562s"
Sep 11 00:30:32.808756 containerd[1604]: time="2025-09-11T00:30:32.808753655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 11 00:30:32.809873 containerd[1604]: time="2025-09-11T00:30:32.809693163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 11 00:30:32.814996 containerd[1604]: time="2025-09-11T00:30:32.814961624Z" level=info msg="CreateContainer within sandbox \"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 11 00:30:32.824969 containerd[1604]: time="2025-09-11T00:30:32.824924428Z" level=info msg="Container 27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:32.838081 containerd[1604]: time="2025-09-11T00:30:32.838012362Z" level=info msg="CreateContainer within sandbox \"01e0ef11e2298aad33adceb01475f8820756e23b32c158687cf89538b223b7d4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59\""
Sep 11 00:30:32.838590 containerd[1604]: time="2025-09-11T00:30:32.838552843Z" level=info msg="StartContainer for \"27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59\""
Sep 11 00:30:32.839524 containerd[1604]: time="2025-09-11T00:30:32.839430451Z" level=info msg="connecting to shim 27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59" address="unix:///run/containerd/s/880493fe9bbdb752a6f755e40a51bb20121a8a87ec1e18d50193caed61c94e31" protocol=ttrpc version=3
Sep 11 00:30:32.867703 systemd[1]: Started cri-containerd-27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59.scope - libcontainer container 27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59.
Sep 11 00:30:32.917622 containerd[1604]: time="2025-09-11T00:30:32.917586415Z" level=info msg="StartContainer for \"27c095ec02b40b966370abb4a78729c20aa18e84672572a506c06a7cadab3a59\" returns successfully"
Sep 11 00:30:33.083563 kubelet[2731]: I0911 00:30:33.082850 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-658957dc95-9xjbv" podStartSLOduration=3.372027788 podStartE2EDuration="27.082835049s" podCreationTimestamp="2025-09-11 00:30:06 +0000 UTC" firstStartedPulling="2025-09-11 00:30:09.098756941 +0000 UTC m=+43.816364831" lastFinishedPulling="2025-09-11 00:30:32.809564202 +0000 UTC m=+67.527172092" observedRunningTime="2025-09-11 00:30:33.081927915 +0000 UTC m=+67.799535805" watchObservedRunningTime="2025-09-11 00:30:33.082835049 +0000 UTC m=+67.800442939"
Sep 11 00:30:35.150325 containerd[1604]: time="2025-09-11T00:30:35.150259501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:35.165886 containerd[1604]: time="2025-09-11T00:30:35.151191861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 11 00:30:35.165886 containerd[1604]: time="2025-09-11T00:30:35.152342484Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:35.166091 containerd[1604]: time="2025-09-11T00:30:35.156627631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.346908027s"
Sep 11 00:30:35.166091 containerd[1604]: time="2025-09-11T00:30:35.166022035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 11 00:30:35.166622 containerd[1604]: time="2025-09-11T00:30:35.166580720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:30:35.171503 containerd[1604]: time="2025-09-11T00:30:35.171454099Z" level=info msg="CreateContainer within sandbox \"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:30:35.183427 containerd[1604]: time="2025-09-11T00:30:35.183378835Z" level=info msg="Container e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:30:35.194176 containerd[1604]: time="2025-09-11T00:30:35.194116095Z" level=info msg="CreateContainer within sandbox \"1241b8f979028de40feeba7791e818076d99f3878576d797a2fb876f4ecd9657\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2\""
Sep 11 00:30:35.194751 containerd[1604]: time="2025-09-11T00:30:35.194703615Z" level=info msg="StartContainer for \"e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2\""
Sep 11 00:30:35.196037 containerd[1604]: time="2025-09-11T00:30:35.196013208Z" level=info msg="connecting to shim e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2" address="unix:///run/containerd/s/138d738f216b68d8723984e7adf298e3040acf1fa9d52d65c76de436a8c21570" protocol=ttrpc version=3
Sep 11 00:30:35.217682 systemd[1]: Started cri-containerd-e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2.scope - libcontainer container e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2.
Sep 11 00:30:35.303778 containerd[1604]: time="2025-09-11T00:30:35.303732105Z" level=info msg="StartContainer for \"e1b060f9267e5b267dd9328b2a6124c9ed9a9d39d0f7a2f0d3c2618b26fe45c2\" returns successfully"
Sep 11 00:30:35.482923 kubelet[2731]: I0911 00:30:35.481813 2731 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:30:35.483444 kubelet[2731]: I0911 00:30:35.483347 2731 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:30:37.672585 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742).
Sep 11 00:30:37.741890 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:37.743729 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:37.749150 systemd-logind[1584]: New session 16 of user core.
Sep 11 00:30:37.758672 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:30:38.123490 sshd[5415]: Connection closed by 10.0.0.1 port 54742
Sep 11 00:30:38.123732 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:38.128246 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:54742.service: Deactivated successfully.
Sep 11 00:30:38.130420 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:30:38.131339 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:30:38.132605 systemd-logind[1584]: Removed session 16.
Sep 11 00:30:39.075672 containerd[1604]: time="2025-09-11T00:30:39.075607641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6a33883325df6bbc39d56c06afe5c51d1148dff3013820210fdb2f39f1b6fa0\" id:\"6b65710715f0277bc71ac12381037b78daec00b969a6982e6cb3d5483f76e89c\" pid:5440 exited_at:{seconds:1757550639 nanos:75193599}"
Sep 11 00:30:39.096071 kubelet[2731]: I0911 00:30:39.095993 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n8vdh" podStartSLOduration=30.640761279 podStartE2EDuration="56.09597551s" podCreationTimestamp="2025-09-11 00:29:43 +0000 UTC" firstStartedPulling="2025-09-11 00:30:09.712102588 +0000 UTC m=+44.429710468" lastFinishedPulling="2025-09-11 00:30:35.167316809 +0000 UTC m=+69.884924699" observedRunningTime="2025-09-11 00:30:36.419789615 +0000 UTC m=+71.137397505" watchObservedRunningTime="2025-09-11 00:30:39.09597551 +0000 UTC m=+73.813583400"
Sep 11 00:30:40.352948 containerd[1604]: time="2025-09-11T00:30:40.352905921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\" id:\"f32162052c37bb12643766ce95115f6d38ae748605933996b04cba17a9c21e0d\" pid:5465 exited_at:{seconds:1757550640 nanos:352594609}"
Sep 11 00:30:43.140503 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:51620.service - OpenSSH per-connection server daemon (10.0.0.1:51620).
Sep 11 00:30:43.192122 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 51620 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:43.193512 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:43.197611 systemd-logind[1584]: New session 17 of user core.
Sep 11 00:30:43.205666 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:30:43.371031 sshd[5480]: Connection closed by 10.0.0.1 port 51620
Sep 11 00:30:43.371390 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:43.376139 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:51620.service: Deactivated successfully.
Sep 11 00:30:43.378403 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:30:43.379165 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:30:43.380430 systemd-logind[1584]: Removed session 17.
Sep 11 00:30:45.059554 containerd[1604]: time="2025-09-11T00:30:45.059497848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\" id:\"585c9306be71536dd8d4da9b1625660aa4a83325275818e7ddc93cf89e0ed4a9\" pid:5504 exited_at:{seconds:1757550645 nanos:59094099}"
Sep 11 00:30:45.587361 containerd[1604]: time="2025-09-11T00:30:45.587308277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad744614db233fbf6e089c39f91d47b560fa1f4a0d3ea2da6db2148882da1e94\" id:\"02b3850c006aa04ceba80b941fa3b9c9d57f71c2c9487b8bf6adb3e21e2b92a7\" pid:5527 exited_at:{seconds:1757550645 nanos:586998359}"
Sep 11 00:30:48.387170 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:51636.service - OpenSSH per-connection server daemon (10.0.0.1:51636).
Sep 11 00:30:48.449759 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 51636 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:48.451314 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:48.457279 systemd-logind[1584]: New session 18 of user core.
Sep 11 00:30:48.466694 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:30:48.534218 kubelet[2731]: I0911 00:30:48.534149 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:30:48.646008 sshd[5541]: Connection closed by 10.0.0.1 port 51636
Sep 11 00:30:48.646210 sshd-session[5539]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:48.655349 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:51636.service: Deactivated successfully.
Sep 11 00:30:48.657222 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:30:48.658058 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:30:48.661261 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:51644.service - OpenSSH per-connection server daemon (10.0.0.1:51644).
Sep 11 00:30:48.662065 systemd-logind[1584]: Removed session 18.
Sep 11 00:30:48.713272 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:48.715186 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:48.720030 systemd-logind[1584]: New session 19 of user core.
Sep 11 00:30:48.726706 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:30:49.018465 sshd[5558]: Connection closed by 10.0.0.1 port 51644
Sep 11 00:30:49.019158 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:49.028645 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:51644.service: Deactivated successfully.
Sep 11 00:30:49.030635 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:30:49.031582 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:30:49.034678 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:51652.service - OpenSSH per-connection server daemon (10.0.0.1:51652).
Sep 11 00:30:49.035600 systemd-logind[1584]: Removed session 19.
Sep 11 00:30:49.089403 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 51652 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:49.091126 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:49.095826 systemd-logind[1584]: New session 20 of user core.
Sep 11 00:30:49.106730 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:30:49.753958 sshd[5572]: Connection closed by 10.0.0.1 port 51652
Sep 11 00:30:49.754326 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:49.765705 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:51652.service: Deactivated successfully.
Sep 11 00:30:49.769393 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:30:49.772276 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:30:49.775378 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:51666.service - OpenSSH per-connection server daemon (10.0.0.1:51666).
Sep 11 00:30:49.776669 systemd-logind[1584]: Removed session 20.
Sep 11 00:30:49.823827 sshd[5596]: Accepted publickey for core from 10.0.0.1 port 51666 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:49.825714 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:49.836456 systemd-logind[1584]: New session 21 of user core.
Sep 11 00:30:49.840706 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:30:50.170122 sshd[5598]: Connection closed by 10.0.0.1 port 51666
Sep 11 00:30:50.172040 sshd-session[5596]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:50.186292 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:51666.service: Deactivated successfully.
Sep 11 00:30:50.189111 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:30:50.191708 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:30:50.195620 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464).
Sep 11 00:30:50.197330 systemd-logind[1584]: Removed session 21.
Sep 11 00:30:50.249425 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:50.250779 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:50.255865 systemd-logind[1584]: New session 22 of user core.
Sep 11 00:30:50.270779 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:30:50.397366 sshd[5612]: Connection closed by 10.0.0.1 port 51464
Sep 11 00:30:50.397691 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:50.403168 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:51464.service: Deactivated successfully.
Sep 11 00:30:50.405366 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:30:50.406370 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:30:50.408006 systemd-logind[1584]: Removed session 22.
Sep 11 00:30:55.411697 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:51480.service - OpenSSH per-connection server daemon (10.0.0.1:51480).
Sep 11 00:30:55.451649 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:30:55.453013 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:55.458034 systemd-logind[1584]: New session 23 of user core.
Sep 11 00:30:55.461705 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:30:55.574361 sshd[5630]: Connection closed by 10.0.0.1 port 51480
Sep 11 00:30:55.574666 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:55.577636 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:51480.service: Deactivated successfully.
Sep 11 00:30:55.580164 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:30:55.582364 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:30:55.583751 systemd-logind[1584]: Removed session 23.
Sep 11 00:30:59.140516 containerd[1604]: time="2025-09-11T00:30:59.140474251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6ab7195d236cf0b3761f501d4eefbe51cc617658ce1f194bd79f15178fbe953\" id:\"53d537782bc7d95ed032b32da28bed5ee5e7cea0249479b25ffed156472e5d0f\" pid:5656 exited_at:{seconds:1757550659 nanos:140126956}"
Sep 11 00:31:00.591898 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:43058.service - OpenSSH per-connection server daemon (10.0.0.1:43058).
Sep 11 00:31:00.658230 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 43058 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:31:00.660252 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:31:00.665114 systemd-logind[1584]: New session 24 of user core.
Sep 11 00:31:00.671702 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:31:00.853000 sshd[5673]: Connection closed by 10.0.0.1 port 43058
Sep 11 00:31:00.853330 sshd-session[5671]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:00.858653 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:43058.service: Deactivated successfully.
Sep 11 00:31:00.861343 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:31:00.862201 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:31:00.863871 systemd-logind[1584]: Removed session 24.
Sep 11 00:31:05.866775 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:43068.service - OpenSSH per-connection server daemon (10.0.0.1:43068).
Sep 11 00:31:05.941368 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 43068 ssh2: RSA SHA256:8X8V7rdVZ+ECcaT1CGYYrs0ghv0R5SZc8pWmSPaEmX4
Sep 11 00:31:05.943059 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:31:05.947772 systemd-logind[1584]: New session 25 of user core.
Sep 11 00:31:05.957751 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 11 00:31:06.143295 sshd[5691]: Connection closed by 10.0.0.1 port 43068
Sep 11 00:31:06.143593 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Sep 11 00:31:06.147760 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:43068.service: Deactivated successfully.
Sep 11 00:31:06.150245 systemd[1]: session-25.scope: Deactivated successfully.
Sep 11 00:31:06.151349 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit.
Sep 11 00:31:06.152920 systemd-logind[1584]: Removed session 25.