Sep 13 10:24:25.889498 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:24:25.889546 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:24:25.889561 kernel: BIOS-provided physical RAM map:
Sep 13 10:24:25.889572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 10:24:25.889582 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 10:24:25.889592 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 10:24:25.889604 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 10:24:25.889615 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 10:24:25.889635 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 10:24:25.889644 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 10:24:25.889652 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:24:25.889660 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 10:24:25.889668 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:24:25.889679 kernel: NX (Execute Disable) protection: active
Sep 13 10:24:25.889697 kernel: APIC: Static calls initialized
Sep 13 10:24:25.889709 kernel: SMBIOS 2.8 present.
Sep 13 10:24:25.889722 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 10:24:25.889731 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:24:25.889740 kernel: Hypervisor detected: KVM
Sep 13 10:24:25.889749 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:24:25.889758 kernel: kvm-clock: using sched offset of 5189132518 cycles
Sep 13 10:24:25.889765 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:24:25.889773 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:24:25.889780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:24:25.889791 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:24:25.889798 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 10:24:25.889806 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 10:24:25.889813 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:24:25.889820 kernel: Using GB pages for direct mapping
Sep 13 10:24:25.889828 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:24:25.889835 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 10:24:25.889843 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889852 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889875 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889882 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 10:24:25.889889 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889896 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889903 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889911 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:24:25.889918 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 10:24:25.889931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 10:24:25.889938 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 10:24:25.889946 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 10:24:25.889953 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 10:24:25.889961 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 10:24:25.889968 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 10:24:25.889977 kernel: No NUMA configuration found
Sep 13 10:24:25.889985 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 10:24:25.889992 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 13 10:24:25.890000 kernel: Zone ranges:
Sep 13 10:24:25.890007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:24:25.890015 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 10:24:25.890022 kernel: Normal empty
Sep 13 10:24:25.890029 kernel: Device empty
Sep 13 10:24:25.890037 kernel: Movable zone start for each node
Sep 13 10:24:25.890044 kernel: Early memory node ranges
Sep 13 10:24:25.890053 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 10:24:25.890068 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 10:24:25.890076 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 10:24:25.890083 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:24:25.890091 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 10:24:25.890098 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 10:24:25.890106 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:24:25.890117 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:24:25.890127 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:24:25.890140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:24:25.890151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:24:25.890165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:24:25.890176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:24:25.890187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:24:25.890197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:24:25.890208 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:24:25.890218 kernel: TSC deadline timer available
Sep 13 10:24:25.890229 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:24:25.890245 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:24:25.890256 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:24:25.890266 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:24:25.890277 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:24:25.890288 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:24:25.890299 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:24:25.890310 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:24:25.890321 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:24:25.890332 kernel: kvm-guest: setup PV sched yield
Sep 13 10:24:25.890343 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 10:24:25.890357 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:24:25.890367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:24:25.890377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:24:25.890386 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:24:25.890396 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:24:25.890405 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:24:25.890415 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:24:25.890427 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:24:25.890441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:24:25.890459 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 10:24:25.890472 kernel: random: crng init done
Sep 13 10:24:25.890483 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:24:25.890493 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:24:25.890503 kernel: Fallback order for Node 0: 0
Sep 13 10:24:25.890515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 13 10:24:25.890528 kernel: Policy zone: DMA32
Sep 13 10:24:25.890540 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:24:25.890557 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:24:25.890568 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:24:25.890578 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:24:25.890587 kernel: Dynamic Preempt: voluntary
Sep 13 10:24:25.890594 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:24:25.890607 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:24:25.890615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:24:25.890623 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:24:25.890635 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:24:25.890645 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:24:25.890652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:24:25.890660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:24:25.890667 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:24:25.890677 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:24:25.890686 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:24:25.890696 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:24:25.890707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:24:25.890731 kernel: Console: colour VGA+ 80x25
Sep 13 10:24:25.890742 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:24:25.890753 kernel: ACPI: Core revision 20240827
Sep 13 10:24:25.890765 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:24:25.890780 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:24:25.890791 kernel: x2apic enabled
Sep 13 10:24:25.890802 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:24:25.890819 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:24:25.890831 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:24:25.890846 kernel: kvm-guest: setup PV IPIs
Sep 13 10:24:25.890875 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:24:25.890890 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:24:25.890902 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:24:25.890912 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:24:25.890923 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:24:25.890933 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:24:25.890943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:24:25.890958 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:24:25.890969 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:24:25.890979 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:24:25.890990 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:24:25.891000 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:24:25.891011 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:24:25.891021 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:24:25.891031 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:24:25.891042 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:24:25.891056 kernel: active return thunk: srso_return_thunk
Sep 13 10:24:25.891077 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:24:25.891087 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:24:25.891095 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:24:25.891103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:24:25.891111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:24:25.891119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 10:24:25.891127 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:24:25.891135 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:24:25.891145 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:24:25.891153 kernel: landlock: Up and running.
Sep 13 10:24:25.891160 kernel: SELinux: Initializing.
Sep 13 10:24:25.891172 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:24:25.891179 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:24:25.891187 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:24:25.891195 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:24:25.891203 kernel: ... version:                0
Sep 13 10:24:25.891210 kernel: ... bit width:              48
Sep 13 10:24:25.891220 kernel: ... generic registers:      6
Sep 13 10:24:25.891228 kernel: ... value mask:             0000ffffffffffff
Sep 13 10:24:25.891235 kernel: ... max period:             00007fffffffffff
Sep 13 10:24:25.891243 kernel: ... fixed-purpose events:   0
Sep 13 10:24:25.891251 kernel: ... event mask:             000000000000003f
Sep 13 10:24:25.891258 kernel: signal: max sigframe size: 1776
Sep 13 10:24:25.891266 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:24:25.891274 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:24:25.891282 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:24:25.891292 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:24:25.891299 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:24:25.891307 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:24:25.891314 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:24:25.891322 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:24:25.891330 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 136904K reserved, 0K cma-reserved)
Sep 13 10:24:25.891338 kernel: devtmpfs: initialized
Sep 13 10:24:25.891346 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:24:25.891353 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:24:25.891363 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:24:25.891371 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:24:25.891379 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:24:25.891386 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:24:25.891394 kernel: audit: type=2000 audit(1757759062.794:1): state=initialized audit_enabled=0 res=1
Sep 13 10:24:25.891402 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:24:25.891409 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:24:25.891417 kernel: cpuidle: using governor menu
Sep 13 10:24:25.891425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:24:25.891435 kernel: dca service started, version 1.12.1
Sep 13 10:24:25.891442 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 10:24:25.891450 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 10:24:25.891458 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:24:25.891465 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 10:24:25.891473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:24:25.891481 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:24:25.891488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:24:25.891496 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:24:25.891506 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:24:25.891514 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:24:25.891521 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:24:25.891529 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:24:25.891537 kernel: ACPI: Interpreter enabled
Sep 13 10:24:25.891544 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:24:25.891552 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:24:25.891560 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:24:25.891567 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:24:25.891577 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:24:25.891585 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:24:25.891892 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:24:25.892043 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:24:25.892178 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:24:25.892189 kernel: PCI host bridge to bus 0000:00
Sep 13 10:24:25.892332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:24:25.892451 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:24:25.892560 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:24:25.892670 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 10:24:25.892779 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 10:24:25.892907 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 10:24:25.893019 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:24:25.893216 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:24:25.893461 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:24:25.893598 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 13 10:24:25.893719 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 13 10:24:25.893839 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 13 10:24:25.893982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:24:25.894138 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:24:25.894274 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 10:24:25.894451 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 13 10:24:25.894617 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 10:24:25.894787 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:24:25.894993 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 10:24:25.895141 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 13 10:24:25.895274 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 10:24:25.895561 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:24:25.895691 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 13 10:24:25.895886 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 13 10:24:25.896052 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 10:24:25.896191 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 13 10:24:25.896335 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:24:25.896457 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:24:25.896624 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:24:25.896844 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 13 10:24:25.897211 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 13 10:24:25.897399 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:24:25.897533 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 10:24:25.897545 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:24:25.897559 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:24:25.897570 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:24:25.897580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:24:25.897591 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:24:25.897602 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:24:25.897613 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:24:25.897625 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:24:25.897636 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:24:25.897647 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:24:25.897662 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:24:25.897674 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:24:25.897685 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:24:25.897697 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:24:25.897709 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:24:25.897720 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:24:25.897732 kernel: iommu: Default domain type: Translated
Sep 13 10:24:25.897744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:24:25.897755 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:24:25.897771 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:24:25.897782 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 10:24:25.897793 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 10:24:25.897985 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:24:25.898175 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:24:25.898329 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:24:25.898341 kernel: vgaarb: loaded
Sep 13 10:24:25.898349 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:24:25.898364 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:24:25.898374 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:24:25.898384 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:24:25.898395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:24:25.898406 kernel: pnp: PnP ACPI init
Sep 13 10:24:25.898656 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 10:24:25.898676 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:24:25.898687 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:24:25.898698 kernel: NET: Registered PF_INET protocol family
Sep 13 10:24:25.898727 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:24:25.898739 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:24:25.898769 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:24:25.898780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:24:25.898788 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:24:25.898796 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:24:25.898804 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:24:25.898812 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:24:25.898824 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:24:25.898832 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:24:25.898982 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:24:25.899106 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:24:25.899237 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:24:25.899408 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 10:24:25.899560 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 10:24:25.899686 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 10:24:25.899697 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:24:25.899710 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:24:25.899718 kernel: Initialise system trusted keyrings
Sep 13 10:24:25.899726 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:24:25.899734 kernel: Key type asymmetric registered
Sep 13 10:24:25.899742 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:24:25.899750 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:24:25.899758 kernel: io scheduler mq-deadline registered
Sep 13 10:24:25.899766 kernel: io scheduler kyber registered
Sep 13 10:24:25.899773 kernel: io scheduler bfq registered
Sep 13 10:24:25.899784 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:24:25.899793 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:24:25.899801 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:24:25.899809 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:24:25.899817 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:24:25.899825 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:24:25.899833 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:24:25.899841 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:24:25.899849 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:24:25.900149 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 10:24:25.900172 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 10:24:25.900324 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 10:24:25.900477 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:24:25 UTC (1757759065)
Sep 13 10:24:25.900633 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 10:24:25.900650 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 10:24:25.900661 kernel: NET: Registered PF_INET6 protocol family
Sep 13 10:24:25.900679 kernel: Segment Routing with IPv6
Sep 13 10:24:25.900691 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 10:24:25.900703 kernel: NET: Registered PF_PACKET protocol family
Sep 13 10:24:25.900713 kernel: Key type dns_resolver registered
Sep 13 10:24:25.900724 kernel: IPI shorthand broadcast: enabled
Sep 13 10:24:25.900735 kernel: sched_clock: Marking stable (3400006053, 111429444)->(3538916155, -27480658)
Sep 13 10:24:25.900746 kernel: registered taskstats version 1
Sep 13 10:24:25.900758 kernel: Loading compiled-in X.509 certificates
Sep 13 10:24:25.900770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350'
Sep 13 10:24:25.900781 kernel: Demotion targets for Node 0: null
Sep 13 10:24:25.900797 kernel: Key type .fscrypt registered
Sep 13 10:24:25.900807 kernel: Key type fscrypt-provisioning registered
Sep 13 10:24:25.900818 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 10:24:25.900828 kernel: ima: Allocated hash algorithm: sha1
Sep 13 10:24:25.900838 kernel: ima: No architecture policies found
Sep 13 10:24:25.900848 kernel: clk: Disabling unused clocks
Sep 13 10:24:25.900859 kernel: Warning: unable to open an initial console.
Sep 13 10:24:25.900888 kernel: Freeing unused kernel image (initmem) memory: 54088K
Sep 13 10:24:25.900900 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 10:24:25.900908 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 13 10:24:25.900916 kernel: Run /init as init process
Sep 13 10:24:25.900924 kernel: with arguments:
Sep 13 10:24:25.900931 kernel: /init
Sep 13 10:24:25.900939 kernel: with environment:
Sep 13 10:24:25.900946 kernel: HOME=/
Sep 13 10:24:25.900956 kernel: TERM=linux
Sep 13 10:24:25.900966 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 10:24:25.900978 systemd[1]: Successfully made /usr/ read-only.
Sep 13 10:24:25.901009 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:24:25.901026 systemd[1]: Detected virtualization kvm.
Sep 13 10:24:25.901038 systemd[1]: Detected architecture x86-64.
Sep 13 10:24:25.901054 systemd[1]: Running in initrd.
Sep 13 10:24:25.901077 systemd[1]: No hostname configured, using default hostname.
Sep 13 10:24:25.901094 systemd[1]: Hostname set to .
Sep 13 10:24:25.901106 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:24:25.901119 systemd[1]: Queued start job for default target initrd.target.
Sep 13 10:24:25.901131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:24:25.901144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:24:25.901158 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 10:24:25.901171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:24:25.901186 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 10:24:25.901199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 10:24:25.901212 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 10:24:25.901224 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 10:24:25.901236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:24:25.901247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:24:25.901258 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:24:25.901272 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:24:25.901283 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:24:25.901295 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:24:25.901306 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:24:25.901318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:24:25.901329 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 10:24:25.901340 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 10:24:25.901351 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 10:24:25.901363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 10:24:25.901377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 10:24:25.901388 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 10:24:25.901400 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 10:24:25.901411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 10:24:25.901425 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 10:24:25.901444 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 10:24:25.901457 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 10:24:25.901469 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 10:24:25.901483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 10:24:25.901497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:24:25.901511 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 10:24:25.901530 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:24:25.901543 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 10:24:25.901600 systemd-journald[220]: Collecting audit messages is disabled. Sep 13 10:24:25.901635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 10:24:25.901648 systemd-journald[220]: Journal started Sep 13 10:24:25.901676 systemd-journald[220]: Runtime Journal (/run/log/journal/71ff41a70a834537b6add967d8f5f62e) is 6M, max 48.6M, 42.5M free. 
Sep 13 10:24:25.897226 systemd-modules-load[223]: Inserted module 'overlay' Sep 13 10:24:25.903902 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 10:24:25.929918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 10:24:25.931906 kernel: Bridge firewalling registered Sep 13 10:24:25.931932 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 13 10:24:25.938613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 10:24:25.942813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:24:25.945409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:24:25.952744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 10:24:25.956472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:24:25.966740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 10:24:25.967628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 10:24:25.980397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 10:24:25.981367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:24:25.982177 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 10:24:25.987644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:24:25.989586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 10:24:25.994853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 10:24:26.011529 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 10:24:26.041935 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0 Sep 13 10:24:26.045194 systemd-resolved[259]: Positive Trust Anchors: Sep 13 10:24:26.045211 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 10:24:26.045242 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 10:24:26.047901 systemd-resolved[259]: Defaulting to hostname 'linux'. Sep 13 10:24:26.049151 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 10:24:26.050694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 10:24:26.168917 kernel: SCSI subsystem initialized Sep 13 10:24:26.177894 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 10:24:26.191908 kernel: iscsi: registered transport (tcp) Sep 13 10:24:26.214127 kernel: iscsi: registered transport (qla4xxx) Sep 13 10:24:26.214186 kernel: QLogic iSCSI HBA Driver Sep 13 10:24:26.238646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 10:24:26.333619 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:24:26.342107 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 10:24:26.406884 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 10:24:26.417159 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 10:24:26.507909 kernel: raid6: avx2x4 gen() 21971 MB/s Sep 13 10:24:26.524906 kernel: raid6: avx2x2 gen() 21878 MB/s Sep 13 10:24:26.541958 kernel: raid6: avx2x1 gen() 20170 MB/s Sep 13 10:24:26.541990 kernel: raid6: using algorithm avx2x4 gen() 21971 MB/s Sep 13 10:24:26.559926 kernel: raid6: .... xor() 7006 MB/s, rmw enabled Sep 13 10:24:26.559989 kernel: raid6: using avx2x2 recovery algorithm Sep 13 10:24:26.583893 kernel: xor: automatically using best checksumming function avx Sep 13 10:24:26.752930 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 10:24:26.762651 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 10:24:26.765123 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 10:24:26.801804 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 13 10:24:26.809560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 10:24:26.810785 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 10:24:26.847633 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 13 10:24:26.880402 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 10:24:26.884627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 10:24:26.969373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:24:26.974363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 10:24:27.013901 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 10:24:27.020027 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 10:24:27.020062 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 10:24:27.039733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 10:24:27.039808 kernel: GPT:9289727 != 19775487 Sep 13 10:24:27.039820 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 10:24:27.039831 kernel: GPT:9289727 != 19775487 Sep 13 10:24:27.039841 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 10:24:27.039851 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:24:27.042899 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 13 10:24:27.045885 kernel: libata version 3.00 loaded. Sep 13 10:24:27.046896 kernel: AES CTR mode by8 optimization enabled Sep 13 10:24:27.061646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:24:27.062447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:24:27.066751 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:24:27.072205 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 10:24:27.072466 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 10:24:27.072484 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 10:24:27.070782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 10:24:27.077920 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 10:24:27.078102 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 10:24:27.080393 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 13 10:24:27.099937 kernel: scsi host0: ahci Sep 13 10:24:27.100298 kernel: scsi host1: ahci Sep 13 10:24:27.101904 kernel: scsi host2: ahci Sep 13 10:24:27.102897 kernel: scsi host3: ahci Sep 13 10:24:27.104899 kernel: scsi host4: ahci Sep 13 10:24:27.105930 kernel: scsi host5: ahci Sep 13 10:24:27.111495 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 lpm-pol 1 Sep 13 10:24:27.111567 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 lpm-pol 1 Sep 13 10:24:27.111586 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 lpm-pol 1 Sep 13 10:24:27.111609 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 lpm-pol 1 Sep 13 10:24:27.111625 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 lpm-pol 1 Sep 13 10:24:27.111641 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 lpm-pol 1 Sep 13 10:24:27.110840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 10:24:27.127344 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 10:24:27.160152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 10:24:27.171060 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 10:24:27.172631 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 10:24:27.175393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 13 10:24:27.280533 disk-uuid[632]: Primary Header is updated. Sep 13 10:24:27.280533 disk-uuid[632]: Secondary Entries is updated. Sep 13 10:24:27.280533 disk-uuid[632]: Secondary Header is updated. Sep 13 10:24:27.300076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:24:27.302831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:24:27.420944 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 10:24:27.421001 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 10:24:27.421894 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 10:24:27.422895 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 10:24:27.423893 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 10:24:27.424889 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 10:24:27.424909 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:24:27.425990 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 10:24:27.426011 kernel: ata3.00: applying bridge limits Sep 13 10:24:27.428060 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:24:27.428079 kernel: ata3.00: configured for UDMA/100 Sep 13 10:24:27.430886 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 10:24:27.474891 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 10:24:27.475145 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 10:24:27.497890 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 10:24:27.885693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 10:24:27.886364 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 10:24:27.888066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 10:24:27.891692 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 13 10:24:27.895120 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 10:24:27.924263 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 10:24:28.293896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:24:28.294292 disk-uuid[633]: The operation has completed successfully. Sep 13 10:24:28.331763 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 10:24:28.331928 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 10:24:28.377276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 10:24:28.401649 sh[663]: Success Sep 13 10:24:28.421895 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 10:24:28.421932 kernel: device-mapper: uevent: version 1.0.3 Sep 13 10:24:28.421948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 10:24:28.431907 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 13 10:24:28.463032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 10:24:28.467024 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 10:24:28.496069 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 10:24:28.502938 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675) Sep 13 10:24:28.502976 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663 Sep 13 10:24:28.503001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:24:28.509341 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 10:24:28.509367 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 10:24:28.510739 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Sep 13 10:24:28.513094 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 10:24:28.515373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 10:24:28.518255 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 10:24:28.520538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 10:24:28.549922 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Sep 13 10:24:28.552089 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:24:28.552121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:24:28.555912 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:24:28.555940 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:24:28.561891 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:24:28.563246 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 10:24:28.566639 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 10:24:28.664569 ignition[757]: Ignition 2.22.0 Sep 13 10:24:28.664588 ignition[757]: Stage: fetch-offline Sep 13 10:24:28.664628 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:24:28.664641 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:24:28.668216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 10:24:28.664752 ignition[757]: parsed url from cmdline: "" Sep 13 10:24:28.672427 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 13 10:24:28.664756 ignition[757]: no config URL provided Sep 13 10:24:28.664763 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 10:24:28.664779 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 13 10:24:28.664809 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 13 10:24:28.664815 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 10:24:28.682140 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 13 10:24:28.722736 ignition[757]: parsing config with SHA512: cdcf6ce712821f33a49397ab15c72cc8192045ce52149a81d4bd3f7a8e07e03c8131e34b38370ae62e1e7bb8334649bcefa36f441343a040c22ff7e50c21d788 Sep 13 10:24:28.725501 systemd-networkd[854]: lo: Link UP Sep 13 10:24:28.725514 systemd-networkd[854]: lo: Gained carrier Sep 13 10:24:28.727477 systemd-networkd[854]: Enumeration completed Sep 13 10:24:28.727979 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 10:24:28.728165 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:24:28.732435 ignition[757]: fetch-offline: fetch-offline passed Sep 13 10:24:28.728170 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 10:24:28.732538 ignition[757]: Ignition finished successfully Sep 13 10:24:28.729067 systemd-networkd[854]: eth0: Link UP Sep 13 10:24:28.729303 systemd-networkd[854]: eth0: Gained carrier Sep 13 10:24:28.729315 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:24:28.731553 systemd[1]: Reached target network.target - Network. 
Sep 13 10:24:28.732038 unknown[757]: fetched base config from "system" Sep 13 10:24:28.732049 unknown[757]: fetched user config from "qemu" Sep 13 10:24:28.737974 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 10:24:28.740328 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 10:24:28.741250 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 10:24:28.750941 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 10:24:28.777880 ignition[858]: Ignition 2.22.0 Sep 13 10:24:28.777894 ignition[858]: Stage: kargs Sep 13 10:24:28.778038 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:24:28.778048 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:24:28.778701 ignition[858]: kargs: kargs passed Sep 13 10:24:28.778748 ignition[858]: Ignition finished successfully Sep 13 10:24:28.783352 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 10:24:28.786645 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 10:24:28.832503 ignition[868]: Ignition 2.22.0 Sep 13 10:24:28.832519 ignition[868]: Stage: disks Sep 13 10:24:28.832651 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:24:28.832661 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:24:28.833499 ignition[868]: disks: disks passed Sep 13 10:24:28.836637 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 10:24:28.833547 ignition[868]: Ignition finished successfully Sep 13 10:24:28.838590 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 10:24:28.840906 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 10:24:28.843146 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 13 10:24:28.845424 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 10:24:28.847896 systemd[1]: Reached target basic.target - Basic System. Sep 13 10:24:28.848959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 10:24:28.876993 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 13 10:24:28.884977 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 10:24:28.886314 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 10:24:28.996888 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none. Sep 13 10:24:28.997443 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 10:24:28.998225 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 10:24:29.001480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 10:24:29.003418 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 10:24:29.004527 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 10:24:29.004569 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 10:24:29.004596 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 10:24:29.017879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 10:24:29.019474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 13 10:24:29.022971 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 13 10:24:29.025917 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:24:29.025939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:24:29.029198 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:24:29.029223 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:24:29.031237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 10:24:29.060651 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 10:24:29.065916 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 13 10:24:29.070386 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 10:24:29.075230 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 10:24:29.169560 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 10:24:29.171940 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 10:24:29.172794 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 10:24:29.197924 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:24:29.214054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 10:24:29.235180 ignition[1000]: INFO : Ignition 2.22.0 Sep 13 10:24:29.235180 ignition[1000]: INFO : Stage: mount Sep 13 10:24:29.237239 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 10:24:29.237239 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:24:29.237239 ignition[1000]: INFO : mount: mount passed Sep 13 10:24:29.237239 ignition[1000]: INFO : Ignition finished successfully Sep 13 10:24:29.240091 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Sep 13 10:24:29.242646 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 10:24:29.501852 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 10:24:29.503807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 10:24:29.528897 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 13 10:24:29.528946 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:24:29.530897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:24:29.533893 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:24:29.533919 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:24:29.535633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 10:24:29.568653 ignition[1029]: INFO : Ignition 2.22.0 Sep 13 10:24:29.568653 ignition[1029]: INFO : Stage: files Sep 13 10:24:29.571037 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 10:24:29.571037 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:24:29.571037 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 13 10:24:29.571037 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 10:24:29.571037 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 10:24:29.578649 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 10:24:29.578649 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 10:24:29.578649 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 10:24:29.578649 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 
13 10:24:29.578649 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 10:24:29.573899 unknown[1029]: wrote ssh authorized keys file for user: core Sep 13 10:24:29.614915 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 10:24:29.938449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 10:24:29.938449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 10:24:29.942473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 10:24:30.108606 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 10:24:30.111033 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 10:24:30.111033 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 10:24:30.558058 systemd-networkd[854]: eth0: Gained IPv6LL Sep 13 10:24:30.764408 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 10:24:30.764408 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 10:24:30.770641 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 10:24:31.059432 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 10:24:31.900528 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 10:24:31.900528 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 10:24:31.904319 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 10:24:32.289298 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 10:24:32.289298 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 10:24:32.289298 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 10:24:32.289298 ignition[1029]: INFO : files: op(d): op(e): [started] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 10:24:32.295994 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 10:24:32.295994 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 10:24:32.295994 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 10:24:32.324007 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 10:24:32.328910 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 10:24:32.330960 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 10:24:32.330960 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 10:24:32.333972 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 10:24:32.335397 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 10:24:32.337261 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 10:24:32.338945 ignition[1029]: INFO : files: files passed Sep 13 10:24:32.338945 ignition[1029]: INFO : Ignition finished successfully Sep 13 10:24:32.342450 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 10:24:32.345044 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 10:24:32.346111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 10:24:32.366717 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 10:24:32.366885 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 10:24:32.372501 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 10:24:32.399186 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:24:32.399186 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:24:32.403240 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:24:32.406992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:24:32.408907 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 10:24:32.413128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 10:24:32.550942 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 10:24:32.551096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 10:24:32.553626 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 10:24:32.555842 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 10:24:32.558006 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 10:24:32.560716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 10:24:32.603521 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:24:32.605415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 10:24:32.631646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:24:32.633037 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:24:32.635563 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 10:24:32.637704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 10:24:32.637892 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:24:32.639176 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 10:24:32.639564 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 10:24:32.639944 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 10:24:32.640460 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:24:32.640814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 10:24:32.641357 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:24:32.641709 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 10:24:32.642253 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:24:32.642616 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 10:24:32.642993 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 10:24:32.643499 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 10:24:32.643817 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 10:24:32.643990 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:24:32.644743 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:24:32.645339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:24:32.645632 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 10:24:32.645825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:24:32.670255 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 10:24:32.670438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:24:32.673650 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 10:24:32.673784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:24:32.675766 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 10:24:32.677522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 10:24:32.681928 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:24:32.683203 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 10:24:32.685479 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 10:24:32.687247 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 10:24:32.687343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:24:32.689077 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 10:24:32.689167 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:24:32.690180 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 10:24:32.690305 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:24:32.692913 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 10:24:32.693025 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 10:24:32.698850 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 10:24:32.700205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 10:24:32.701685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 10:24:32.701846 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:24:32.703629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 10:24:32.703750 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:24:32.711562 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 10:24:32.714149 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 10:24:32.773442 ignition[1084]: INFO : Ignition 2.22.0
Sep 13 10:24:32.773442 ignition[1084]: INFO : Stage: umount
Sep 13 10:24:32.828537 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:24:32.828537 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:24:32.828537 ignition[1084]: INFO : umount: umount passed
Sep 13 10:24:32.828537 ignition[1084]: INFO : Ignition finished successfully
Sep 13 10:24:32.777633 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 10:24:32.777787 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 10:24:32.828154 systemd[1]: Stopped target network.target - Network.
Sep 13 10:24:32.828613 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 10:24:32.828689 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 10:24:32.829179 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 10:24:32.829226 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 10:24:32.829525 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 10:24:32.829587 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 10:24:32.830098 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 10:24:32.830143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 10:24:32.940738 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 10:24:32.942720 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 10:24:32.946715 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 10:24:32.946969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 10:24:32.952201 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 10:24:32.952526 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 10:24:32.952591 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:24:32.956657 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:24:32.957032 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 10:24:32.957167 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 10:24:32.960193 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 10:24:32.960689 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 10:24:32.961596 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 10:24:32.961650 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:24:32.964844 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 10:24:32.978002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 10:24:32.978073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:24:32.978334 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:24:32.978377 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:24:32.983969 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 10:24:32.984028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:24:32.985154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:24:32.991310 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:24:32.995333 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 10:24:33.008408 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 10:24:33.008624 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:24:33.011120 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 10:24:33.011246 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 10:24:33.012410 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 10:24:33.012522 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 10:24:33.016193 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 10:24:33.016283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:24:33.019133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 10:24:33.019187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:24:33.020200 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 10:24:33.020255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:24:33.020997 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 10:24:33.021054 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:24:33.021800 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 10:24:33.021849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:24:33.022691 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 10:24:33.022744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 10:24:33.024088 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 10:24:33.033016 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 10:24:33.033085 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:24:33.037660 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 10:24:33.037730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:24:33.042168 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 10:24:33.042221 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:24:33.047244 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 10:24:33.047294 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:24:33.048467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:24:33.048522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:24:33.054218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 10:24:33.054330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 10:24:33.055125 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 10:24:33.058777 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 10:24:33.084675 systemd[1]: Switching root.
Sep 13 10:24:33.119271 systemd-journald[220]: Journal stopped
Sep 13 10:24:35.719174 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:24:35.719256 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 10:24:35.719271 kernel: SELinux: policy capability open_perms=1
Sep 13 10:24:35.719282 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 10:24:35.719300 kernel: SELinux: policy capability always_check_network=0
Sep 13 10:24:35.719311 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 10:24:35.719333 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 10:24:35.719344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 10:24:35.719355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 10:24:35.719370 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 10:24:35.719385 kernel: audit: type=1403 audit(1757759073.876:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 10:24:35.719406 systemd[1]: Successfully loaded SELinux policy in 75.772ms.
Sep 13 10:24:35.719431 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.892ms.
Sep 13 10:24:35.719462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:24:35.719477 systemd[1]: Detected virtualization kvm.
Sep 13 10:24:35.719493 systemd[1]: Detected architecture x86-64.
Sep 13 10:24:35.719505 systemd[1]: Detected first boot.
Sep 13 10:24:35.719517 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:24:35.719537 zram_generator::config[1132]: No configuration found.
Sep 13 10:24:35.719550 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1255991160 wd_nsec: 1255991230
Sep 13 10:24:35.719573 kernel: Guest personality initialized and is inactive
Sep 13 10:24:35.719584 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 10:24:35.719596 kernel: Initialized host personality
Sep 13 10:24:35.719613 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 10:24:35.719630 systemd[1]: Populated /etc with preset unit settings.
Sep 13 10:24:35.719647 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 10:24:35.719660 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 10:24:35.719673 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 10:24:35.719684 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 10:24:35.719697 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 10:24:35.719709 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 10:24:35.719724 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 10:24:35.719736 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 10:24:35.719754 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 10:24:35.719769 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 10:24:35.719783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 10:24:35.719811 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 10:24:35.719824 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:24:35.719837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:24:35.719849 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 10:24:35.719893 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 10:24:35.719906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 10:24:35.719918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:24:35.719933 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 10:24:35.719950 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:24:35.719967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:24:35.719984 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 10:24:35.720003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 10:24:35.720021 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:24:35.720033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 10:24:35.720045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:24:35.720057 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:24:35.720069 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:24:35.720093 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:24:35.720105 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 10:24:35.720117 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 10:24:35.720129 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 10:24:35.720144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:24:35.720156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:24:35.720168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:24:35.720183 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 10:24:35.720200 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 10:24:35.720214 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 10:24:35.720236 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 10:24:35.720249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:35.720261 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 10:24:35.720276 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 10:24:35.720288 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 10:24:35.720300 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 10:24:35.720312 systemd[1]: Reached target machines.target - Containers.
Sep 13 10:24:35.720324 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 10:24:35.720337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:24:35.720349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:24:35.720365 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 10:24:35.720398 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:24:35.720412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:24:35.720424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:24:35.720436 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 10:24:35.720448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:24:35.720461 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 10:24:35.720473 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 10:24:35.720485 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 10:24:35.720504 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 10:24:35.720516 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 10:24:35.720529 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:24:35.720541 kernel: fuse: init (API version 7.41)
Sep 13 10:24:35.720553 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:24:35.720565 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:24:35.720578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:24:35.720595 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 10:24:35.720630 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 10:24:35.720648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:24:35.720660 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 10:24:35.720672 systemd[1]: Stopped verity-setup.service.
Sep 13 10:24:35.720685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:35.720708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 10:24:35.720722 kernel: ACPI: bus type drm_connector registered
Sep 13 10:24:35.720734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 10:24:35.720746 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 10:24:35.720758 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 10:24:35.720773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 10:24:35.720793 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 10:24:35.720820 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 10:24:35.720879 systemd-journald[1208]: Collecting audit messages is disabled.
Sep 13 10:24:35.720904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:24:35.720916 kernel: loop: module loaded
Sep 13 10:24:35.720929 systemd-journald[1208]: Journal started
Sep 13 10:24:35.720955 systemd-journald[1208]: Runtime Journal (/run/log/journal/71ff41a70a834537b6add967d8f5f62e) is 6M, max 48.6M, 42.5M free.
Sep 13 10:24:35.376095 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 10:24:35.398330 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 10:24:35.398961 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 10:24:35.723151 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 10:24:35.723183 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 10:24:35.725925 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:24:35.727219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:24:35.727446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:24:35.728895 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:24:35.729118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:24:35.730431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:24:35.730648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:24:35.732255 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 10:24:35.732485 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 10:24:35.733836 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:24:35.734086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:24:35.735685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:24:35.737312 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:24:35.739250 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 10:24:35.741024 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 10:24:35.742663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:24:35.758182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:24:35.760691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 10:24:35.762886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 10:24:35.764021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 10:24:35.764054 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:24:35.766019 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 10:24:35.780024 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 10:24:35.781641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:24:35.783518 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 10:24:35.786042 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 10:24:35.787355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:24:35.789976 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 10:24:35.791406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:24:35.796332 systemd-journald[1208]: Time spent on flushing to /var/log/journal/71ff41a70a834537b6add967d8f5f62e is 17.114ms for 980 entries.
Sep 13 10:24:35.796332 systemd-journald[1208]: System Journal (/var/log/journal/71ff41a70a834537b6add967d8f5f62e) is 8M, max 195.6M, 187.6M free.
Sep 13 10:24:36.390213 systemd-journald[1208]: Received client request to flush runtime journal.
Sep 13 10:24:36.390305 kernel: loop0: detected capacity change from 0 to 110984
Sep 13 10:24:36.390342 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 10:24:36.390369 kernel: loop1: detected capacity change from 0 to 128016
Sep 13 10:24:36.390394 kernel: loop2: detected capacity change from 0 to 224512
Sep 13 10:24:36.390417 kernel: loop3: detected capacity change from 0 to 110984
Sep 13 10:24:36.390444 kernel: loop4: detected capacity change from 0 to 128016
Sep 13 10:24:36.390466 kernel: loop5: detected capacity change from 0 to 224512
Sep 13 10:24:36.390488 zram_generator::config[1295]: No configuration found.
Sep 13 10:24:35.794016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:24:35.797676 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 10:24:35.805245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:24:35.809008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 10:24:35.810737 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 10:24:35.887538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:24:35.896518 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 13 10:24:35.896532 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 13 10:24:35.901954 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:24:35.905328 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 10:24:36.035305 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 10:24:36.075180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:24:36.114731 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 13 10:24:36.114746 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 13 10:24:36.118816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:24:36.172426 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 10:24:36.173093 (sd-merge)[1264]: Merged extensions into '/usr'.
Sep 13 10:24:36.177566 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 10:24:36.177577 systemd[1]: Reloading...
Sep 13 10:24:36.503930 systemd[1]: Reloading finished in 325 ms.
Sep 13 10:24:36.527278 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 10:24:36.529117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 10:24:36.530730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 10:24:36.535723 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 10:24:36.540213 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 10:24:36.560282 systemd[1]: Starting ensure-sysext.service...
Sep 13 10:24:36.562273 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 10:24:36.564668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:24:36.583464 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 10:24:36.591289 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Sep 13 10:24:36.591304 systemd[1]: Reloading...
Sep 13 10:24:36.594257 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 10:24:36.594294 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 10:24:36.594628 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 10:24:36.594931 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 10:24:36.595845 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 10:24:36.596164 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 13 10:24:36.596244 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 13 10:24:36.600657 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:24:36.600674 systemd-tmpfiles[1337]: Skipping /boot
Sep 13 10:24:36.611008 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:24:36.611022 systemd-tmpfiles[1337]: Skipping /boot
Sep 13 10:24:36.701904 zram_generator::config[1372]: No configuration found.
Sep 13 10:24:36.880134 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 10:24:36.880421 systemd[1]: Reloading finished in 288 ms.
Sep 13 10:24:36.932400 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 10:24:36.934152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:24:36.943491 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 10:24:36.992920 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 10:24:36.995426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 10:24:37.000083 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:24:37.005170 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 10:24:37.008455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:37.008630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:24:37.009887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:24:37.013175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:24:37.016596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:24:37.017785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:24:37.017956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:24:37.018051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:37.019186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:24:37.019416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:24:37.023318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:24:37.023551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:24:37.025599 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:24:37.025820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:24:37.032681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:37.032945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:24:37.034339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:24:37.036787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:24:37.042116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:24:37.046200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:24:37.047426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:24:37.047526 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:24:37.056843 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 10:24:37.058008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:24:37.059418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:24:37.059654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:24:37.090836 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:24:37.091183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:24:37.093157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:24:37.093654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:24:37.095921 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:24:37.096370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:24:37.107527 systemd[1]: Finished ensure-sysext.service.
Sep 13 10:24:37.115522 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:24:37.117234 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:24:37.121670 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 10:24:37.124140 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 10:24:37.135529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 10:24:37.180408 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 10:24:37.182631 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 10:24:37.266364 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 10:24:37.298292 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 10:24:37.314064 augenrules[1456]: No rules
Sep 13 10:24:37.315042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:24:37.318025 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 10:24:37.322918 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 10:24:37.323282 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 10:24:37.343310 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 10:24:37.365540 systemd-udevd[1458]: Using default interface naming scheme 'v255'.
Sep 13 10:24:37.384060 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 10:24:37.386286 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 10:24:37.387888 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:24:37.388902 systemd-resolved[1411]: Positive Trust Anchors:
Sep 13 10:24:37.388922 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:24:37.388953 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:24:37.392490 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:24:37.393146 systemd-resolved[1411]: Defaulting to hostname 'linux'.
Sep 13 10:24:37.396410 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:24:37.398695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:24:37.400320 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:24:37.401831 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 10:24:37.403602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 10:24:37.405217 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 10:24:37.406855 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 10:24:37.408135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 10:24:37.409492 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 10:24:37.410954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 10:24:37.410989 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:24:37.412096 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:24:37.415535 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 10:24:37.418466 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 10:24:37.422148 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 10:24:37.425179 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 10:24:37.426856 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 10:24:37.432082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 10:24:37.434340 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 10:24:37.436437 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 10:24:37.446933 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:24:37.448941 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:24:37.449969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:24:37.450002 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:24:37.452952 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 10:24:37.456001 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 10:24:37.460022 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 10:24:37.464379 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 10:24:37.465609 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 10:24:37.470882 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 10:24:37.484747 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 10:24:37.490913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 10:24:37.495943 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 10:24:37.499880 jq[1500]: false
Sep 13 10:24:37.498385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 10:24:37.500173 oslogin_cache_refresh[1504]: Refreshing passwd entry cache
Sep 13 10:24:37.500588 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing passwd entry cache
Sep 13 10:24:37.506629 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting users, quitting
Sep 13 10:24:37.506629 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:24:37.506629 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing group entry cache
Sep 13 10:24:37.506629 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting groups, quitting
Sep 13 10:24:37.506629 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:24:37.505701 oslogin_cache_refresh[1504]: Failure getting users, quitting
Sep 13 10:24:37.505722 oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:24:37.505786 oslogin_cache_refresh[1504]: Refreshing group entry cache
Sep 13 10:24:37.506279 oslogin_cache_refresh[1504]: Failure getting groups, quitting
Sep 13 10:24:37.506288 oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:24:37.508666 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 10:24:37.510632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 10:24:37.511460 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 10:24:37.512479 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 10:24:37.514374 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 10:24:37.517810 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 10:24:37.520564 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 10:24:37.523069 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 10:24:37.523419 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 10:24:37.523717 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 10:24:37.531988 extend-filesystems[1502]: Found /dev/vda6
Sep 13 10:24:37.534012 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 10:24:37.541037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 10:24:37.541593 extend-filesystems[1502]: Found /dev/vda9
Sep 13 10:24:37.541376 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 10:24:37.546209 update_engine[1518]: I20250913 10:24:37.542601 1518 main.cc:92] Flatcar Update Engine starting
Sep 13 10:24:37.546464 extend-filesystems[1502]: Checking size of /dev/vda9
Sep 13 10:24:37.554900 jq[1519]: true
Sep 13 10:24:37.568474 jq[1540]: true
Sep 13 10:24:37.572316 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 10:24:37.572603 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 10:24:37.575978 tar[1522]: linux-amd64/LICENSE
Sep 13 10:24:37.575978 tar[1522]: linux-amd64/helm
Sep 13 10:24:37.583902 extend-filesystems[1502]: Resized partition /dev/vda9
Sep 13 10:24:37.587588 dbus-daemon[1497]: [system] SELinux support is enabled
Sep 13 10:24:37.587768 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 10:24:37.591811 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 10:24:37.591917 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 10:24:37.593203 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 10:24:37.593218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 10:24:37.595544 extend-filesystems[1548]: resize2fs 1.47.3 (8-Jul-2025)
Sep 13 10:24:37.602845 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 10:24:37.605499 update_engine[1518]: I20250913 10:24:37.605324 1518 update_check_scheduler.cc:74] Next update check in 8m43s
Sep 13 10:24:37.606063 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 10:24:37.608883 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 10:24:37.636433 systemd-networkd[1472]: lo: Link UP
Sep 13 10:24:37.636447 systemd-networkd[1472]: lo: Gained carrier
Sep 13 10:24:37.644237 systemd-networkd[1472]: Enumeration completed
Sep 13 10:24:37.644446 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:24:37.644839 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:24:37.674757 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 10:24:37.644844 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:24:37.646119 systemd[1]: Reached target network.target - Network.
Sep 13 10:24:37.648983 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 10:24:37.650336 systemd-networkd[1472]: eth0: Link UP
Sep 13 10:24:37.650510 systemd-networkd[1472]: eth0: Gained carrier
Sep 13 10:24:37.650528 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:24:37.653908 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 10:24:37.662144 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 10:24:37.669650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:24:37.675679 systemd-networkd[1472]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:24:37.677314 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Sep 13 10:24:37.677742 systemd-logind[1517]: New seat seat0.
Sep 13 10:24:38.394127 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 10:24:38.394127 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 10:24:38.394127 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 10:24:38.394000 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 10:24:38.394082 systemd-timesyncd[1440]: Initial clock synchronization to Sat 2025-09-13 10:24:38.393749 UTC.
Sep 13 10:24:38.397505 systemd-resolved[1411]: Clock change detected. Flushing caches.
Sep 13 10:24:38.398352 extend-filesystems[1502]: Resized filesystem in /dev/vda9
Sep 13 10:24:38.399355 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 10:24:38.402896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 10:24:38.404634 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 10:24:38.406233 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 10:24:38.406589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 10:24:38.408192 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 10:24:38.417294 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 10:24:38.422023 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 10:24:38.433269 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 10:24:38.441122 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 10:24:38.446970 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 10:24:38.461050 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 10:24:38.466126 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 10:24:38.466410 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 10:24:38.498568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 10:24:38.506555 kernel: ACPI: button: Power Button [PWRF]
Sep 13 10:24:38.555523 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 10:24:38.555391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:24:38.607462 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 10:24:38.612069 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 10:24:38.643064 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 10:24:38.643381 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 10:24:38.645710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 10:24:38.676509 kernel: kvm_amd: TSC scaling supported
Sep 13 10:24:38.676619 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 10:24:38.676635 kernel: kvm_amd: Nested Paging enabled
Sep 13 10:24:38.676647 kernel: kvm_amd: LBR virtualization supported
Sep 13 10:24:38.676659 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 10:24:38.676672 kernel: kvm_amd: Virtual GIF supported
Sep 13 10:24:38.694550 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 10:24:38.697986 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 10:24:38.699612 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 10:24:38.699803 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 10:24:38.707404 containerd[1571]: time="2025-09-13T10:24:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 10:24:38.708726 containerd[1571]: time="2025-09-13T10:24:38.708691764Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 13 10:24:38.711419 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 10:24:38.716667 systemd-logind[1517]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 10:24:38.723372 containerd[1571]: time="2025-09-13T10:24:38.723315284Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.6µs"
Sep 13 10:24:38.723372 containerd[1571]: time="2025-09-13T10:24:38.723368293Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 10:24:38.723452 containerd[1571]: time="2025-09-13T10:24:38.723397698Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 10:24:38.723731 containerd[1571]: time="2025-09-13T10:24:38.723697370Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 10:24:38.723764 containerd[1571]: time="2025-09-13T10:24:38.723738117Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 10:24:38.723815 containerd[1571]: time="2025-09-13T10:24:38.723778863Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:24:38.723966 containerd[1571]: time="2025-09-13T10:24:38.723920399Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:24:38.723966 containerd[1571]: time="2025-09-13T10:24:38.723959552Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:24:38.724466 containerd[1571]: time="2025-09-13T10:24:38.724428732Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:24:38.724502 containerd[1571]: time="2025-09-13T10:24:38.724465952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:24:38.724502 containerd[1571]: time="2025-09-13T10:24:38.724485108Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:24:38.724555 containerd[1571]: time="2025-09-13T10:24:38.724501839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 10:24:38.724974 containerd[1571]: time="2025-09-13T10:24:38.724933188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 10:24:38.725357 containerd[1571]: time="2025-09-13T10:24:38.725323320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:24:38.725394 containerd[1571]: time="2025-09-13T10:24:38.725378614Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:24:38.725430 containerd[1571]: time="2025-09-13T10:24:38.725393412Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 10:24:38.725479 containerd[1571]: time="2025-09-13T10:24:38.725455047Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 10:24:38.725906 containerd[1571]: time="2025-09-13T10:24:38.725858304Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 10:24:38.726021 containerd[1571]: time="2025-09-13T10:24:38.725980032Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 10:24:38.733103 containerd[1571]: time="2025-09-13T10:24:38.733065823Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 10:24:38.733145 containerd[1571]: time="2025-09-13T10:24:38.733130073Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 10:24:38.733167 containerd[1571]: time="2025-09-13T10:24:38.733150361Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 10:24:38.733248 containerd[1571]: time="2025-09-13T10:24:38.733218960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 10:24:38.733283 containerd[1571]: time="2025-09-13T10:24:38.733248956Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 10:24:38.733283 containerd[1571]: time="2025-09-13T10:24:38.733263784Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 10:24:38.733283 containerd[1571]: time="2025-09-13T10:24:38.733278682Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 10:24:38.733369 containerd[1571]: time="2025-09-13T10:24:38.733293069Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 10:24:38.733369 containerd[1571]: time="2025-09-13T10:24:38.733306354Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 10:24:38.733369 containerd[1571]: time="2025-09-13T10:24:38.733318647Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 13 10:24:38.733369 containerd[1571]: time="2025-09-13T10:24:38.733329297Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 13 10:24:38.733369 containerd[1571]: time="2025-09-13T10:24:38.733344055Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 13 10:24:38.733568 containerd[1571]: time="2025-09-13T10:24:38.733516027Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 13 10:24:38.733607 containerd[1571]: time="2025-09-13T10:24:38.733575459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 13 10:24:38.733607 containerd[1571]: time="2025-09-13T10:24:38.733599163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 13 10:24:38.733646 containerd[1571]: time="2025-09-13T10:24:38.733619531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 13 10:24:38.733646 containerd[1571]: time="2025-09-13T10:24:38.733641392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 13 10:24:38.733683 containerd[1571]: time="2025-09-13T10:24:38.733656942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 13 10:24:38.733709 containerd[1571]: time="2025-09-13T10:24:38.733679364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 13 10:24:38.733709 containerd[1571]: time="2025-09-13T10:24:38.733694993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 13 10:24:38.733753 containerd[1571]: time="2025-09-13T10:24:38.733709300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 13 10:24:38.733753 containerd[1571]: time="2025-09-13T10:24:38.733723677Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 13 10:24:38.733753 containerd[1571]: time="2025-09-13T10:24:38.733737914Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 13 10:24:38.733846 containerd[1571]: time="2025-09-13T10:24:38.733814908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 13 10:24:38.733881 containerd[1571]: time="2025-09-13T10:24:38.733846377Z" level=info msg="Start snapshots syncer"
Sep 13 10:24:38.733925 containerd[1571]: time="2025-09-13T10:24:38.733898214Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 13 10:24:38.734328 containerd[1571]: time="2025-09-13T10:24:38.734265583Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 13 10:24:38.734441 containerd[1571]: time="2025-09-13T10:24:38.734348028Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 13 10:24:38.735740 containerd[1571]: time="2025-09-13T10:24:38.735588785Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 13 10:24:38.735942 containerd[1571]: time="2025-09-13T10:24:38.735868971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 13 10:24:38.735942 containerd[1571]: time="2025-09-13T10:24:38.735904297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 13 10:24:38.735942 containerd[1571]: time="2025-09-13T10:24:38.735919455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 13 10:24:38.735942 containerd[1571]: time="2025-09-13T10:24:38.735933291Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 13 10:24:38.736028 containerd[1571]: time="2025-09-13T10:24:38.735947648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 13 10:24:38.736028 containerd[1571]: time="2025-09-13T10:24:38.735962787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 13 10:24:38.736028 containerd[1571]: time="2025-09-13T10:24:38.735976703Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 13 10:24:38.736028 containerd[1571]: time="2025-09-13T10:24:38.736025745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 13 10:24:38.736109 containerd[1571]: time="2025-09-13T10:24:38.736042356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 13 10:24:38.736109 containerd[1571]: time="2025-09-13T10:24:38.736055571Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 13 10:24:38.736109 containerd[1571]: time="2025-09-13T10:24:38.736106045Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:24:38.736164 containerd[1571]: time="2025-09-13T10:24:38.736121855Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:24:38.736164 containerd[1571]: time="2025-09-13T10:24:38.736131563Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:24:38.736164 containerd[1571]: time="2025-09-13T10:24:38.736140640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:24:38.736164 containerd[1571]: time="2025-09-13T10:24:38.736148716Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 10:24:38.736164 containerd[1571]: time="2025-09-13T10:24:38.736157783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736167922Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736185785Z" level=info msg="runtime interface created" Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736191436Z" level=info msg="created NRI interface" Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736199781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736210732Z" level=info msg="Connect containerd service" Sep 13 10:24:38.736256 containerd[1571]: time="2025-09-13T10:24:38.736233164Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 10:24:38.737218 
containerd[1571]: time="2025-09-13T10:24:38.737186562Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:24:38.766155 kernel: EDAC MC: Ver: 3.0.0 Sep 13 10:24:38.843259 tar[1522]: linux-amd64/README.md Sep 13 10:24:38.856995 containerd[1571]: time="2025-09-13T10:24:38.856937611Z" level=info msg="Start subscribing containerd event" Sep 13 10:24:38.857176 containerd[1571]: time="2025-09-13T10:24:38.857121686Z" level=info msg="Start recovering state" Sep 13 10:24:38.857369 containerd[1571]: time="2025-09-13T10:24:38.857353651Z" level=info msg="Start event monitor" Sep 13 10:24:38.857462 containerd[1571]: time="2025-09-13T10:24:38.857449812Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:24:38.857549 containerd[1571]: time="2025-09-13T10:24:38.857521877Z" level=info msg="Start streaming server" Sep 13 10:24:38.857625 containerd[1571]: time="2025-09-13T10:24:38.857611435Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:24:38.857699 containerd[1571]: time="2025-09-13T10:24:38.857686045Z" level=info msg="runtime interface starting up..." Sep 13 10:24:38.857752 containerd[1571]: time="2025-09-13T10:24:38.857733434Z" level=info msg="starting plugins..." Sep 13 10:24:38.857821 containerd[1571]: time="2025-09-13T10:24:38.857808304Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:24:38.857954 containerd[1571]: time="2025-09-13T10:24:38.857148777Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 10:24:38.858050 containerd[1571]: time="2025-09-13T10:24:38.858022726Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 10:24:38.858217 containerd[1571]: time="2025-09-13T10:24:38.858203776Z" level=info msg="containerd successfully booted in 0.151310s" Sep 13 10:24:38.858301 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:24:38.867807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:24:38.874198 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:24:39.705512 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:24:39.708747 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:55140.service - OpenSSH per-connection server daemon (10.0.0.1:55140). Sep 13 10:24:39.796367 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 55140 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:39.798473 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:39.805305 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 10:24:40.039434 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:24:40.048660 systemd-logind[1517]: New session 1 of user core. Sep 13 10:24:40.062435 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:24:40.067184 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:24:40.086232 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:24:40.089648 systemd-logind[1517]: New session c1 of user core. Sep 13 10:24:40.252203 systemd[1645]: Queued start job for default target default.target. Sep 13 10:24:40.270861 systemd[1645]: Created slice app.slice - User Application Slice. Sep 13 10:24:40.270888 systemd[1645]: Reached target paths.target - Paths. Sep 13 10:24:40.270933 systemd[1645]: Reached target timers.target - Timers. 
Sep 13 10:24:40.272663 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:24:40.286648 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:24:40.286772 systemd[1645]: Reached target sockets.target - Sockets. Sep 13 10:24:40.286813 systemd[1645]: Reached target basic.target - Basic System. Sep 13 10:24:40.286853 systemd[1645]: Reached target default.target - Main User Target. Sep 13 10:24:40.286886 systemd[1645]: Startup finished in 189ms. Sep 13 10:24:40.287864 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:24:40.291080 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 10:24:40.356911 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:54274.service - OpenSSH per-connection server daemon (10.0.0.1:54274). Sep 13 10:24:40.361714 systemd-networkd[1472]: eth0: Gained IPv6LL Sep 13 10:24:40.366118 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:24:40.369019 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:24:40.372235 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 10:24:40.375034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:24:40.381412 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:24:40.413526 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 10:24:40.413945 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:24:40.415741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:24:40.420747 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 13 10:24:40.424688 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:40.426457 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:40.431424 systemd-logind[1517]: New session 2 of user core. Sep 13 10:24:40.441724 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 10:24:40.498694 sshd[1677]: Connection closed by 10.0.0.1 port 54274 Sep 13 10:24:40.500123 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:40.515386 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:54274.service: Deactivated successfully. Sep 13 10:24:40.517370 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:24:40.518107 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:24:40.520901 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:54290.service - OpenSSH per-connection server daemon (10.0.0.1:54290). Sep 13 10:24:40.523312 systemd-logind[1517]: Removed session 2. Sep 13 10:24:40.585441 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:40.587117 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:40.593676 systemd-logind[1517]: New session 3 of user core. Sep 13 10:24:40.606876 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:24:40.715186 sshd[1686]: Connection closed by 10.0.0.1 port 54290 Sep 13 10:24:40.715761 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:40.722306 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:54290.service: Deactivated successfully. Sep 13 10:24:40.724913 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:24:40.725815 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. 
Sep 13 10:24:40.727155 systemd-logind[1517]: Removed session 3. Sep 13 10:24:41.656156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:24:41.658510 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:24:41.660292 systemd[1]: Startup finished in 3.526s (kernel) + 8.175s (initrd) + 7.132s (userspace) = 18.833s. Sep 13 10:24:41.691283 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:24:42.414048 kubelet[1696]: E0913 10:24:42.413959 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:24:42.420438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:24:42.420773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:24:42.421332 systemd[1]: kubelet.service: Consumed 1.782s CPU time, 265.7M memory peak. Sep 13 10:24:50.741369 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:38518.service - OpenSSH per-connection server daemon (10.0.0.1:38518). Sep 13 10:24:50.808950 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38518 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:50.810374 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:50.814761 systemd-logind[1517]: New session 4 of user core. Sep 13 10:24:50.824666 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 13 10:24:50.876902 sshd[1712]: Connection closed by 10.0.0.1 port 38518 Sep 13 10:24:50.877266 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:50.891491 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:38518.service: Deactivated successfully. Sep 13 10:24:50.893483 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:24:50.894375 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:24:50.897169 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:38532.service - OpenSSH per-connection server daemon (10.0.0.1:38532). Sep 13 10:24:50.897808 systemd-logind[1517]: Removed session 4. Sep 13 10:24:50.954194 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 38532 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:50.955872 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:50.960409 systemd-logind[1517]: New session 5 of user core. Sep 13 10:24:50.969667 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 10:24:51.020504 sshd[1721]: Connection closed by 10.0.0.1 port 38532 Sep 13 10:24:51.021000 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:51.042174 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:38532.service: Deactivated successfully. Sep 13 10:24:51.044420 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:24:51.045198 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Sep 13 10:24:51.048676 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:38536.service - OpenSSH per-connection server daemon (10.0.0.1:38536). Sep 13 10:24:51.049431 systemd-logind[1517]: Removed session 5. 
Sep 13 10:24:51.117350 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38536 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:51.118993 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:51.124182 systemd-logind[1517]: New session 6 of user core. Sep 13 10:24:51.134737 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 10:24:51.191496 sshd[1730]: Connection closed by 10.0.0.1 port 38536 Sep 13 10:24:51.191915 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:51.206769 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:38536.service: Deactivated successfully. Sep 13 10:24:51.208817 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:24:51.209818 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:24:51.212728 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546). Sep 13 10:24:51.213295 systemd-logind[1517]: Removed session 6. Sep 13 10:24:51.279413 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:51.280919 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:51.285394 systemd-logind[1517]: New session 7 of user core. Sep 13 10:24:51.295875 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 10:24:51.356980 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:24:51.357393 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:24:51.374635 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 13 10:24:51.377372 sshd[1739]: Connection closed by 10.0.0.1 port 38546 Sep 13 10:24:51.378006 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:51.393164 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:38546.service: Deactivated successfully. Sep 13 10:24:51.396005 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 10:24:51.397088 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Sep 13 10:24:51.401284 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:38552.service - OpenSSH per-connection server daemon (10.0.0.1:38552). Sep 13 10:24:51.402188 systemd-logind[1517]: Removed session 7. Sep 13 10:24:51.466296 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 38552 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:51.468266 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:51.474334 systemd-logind[1517]: New session 8 of user core. Sep 13 10:24:51.488813 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 10:24:51.547162 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:24:51.547480 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:24:51.587609 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 13 10:24:51.594614 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:24:51.594929 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:24:51.604977 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:24:51.666123 augenrules[1774]: No rules Sep 13 10:24:51.668247 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:24:51.668572 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:24:51.669873 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 13 10:24:51.671841 sshd[1750]: Connection closed by 10.0.0.1 port 38552 Sep 13 10:24:51.672218 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 13 10:24:51.683940 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:38552.service: Deactivated successfully. Sep 13 10:24:51.685959 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 10:24:51.686788 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Sep 13 10:24:51.689516 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562). Sep 13 10:24:51.690371 systemd-logind[1517]: Removed session 8. Sep 13 10:24:51.751967 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:24:51.754027 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:24:51.759777 systemd-logind[1517]: New session 9 of user core. 
Sep 13 10:24:51.769734 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 10:24:51.826479 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:24:51.826884 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:24:52.586674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 10:24:52.588633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:24:52.635948 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 10:24:52.641887 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:24:52.917552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:24:52.933084 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:24:53.060625 kubelet[1817]: E0913 10:24:53.060549 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:24:53.067727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:24:53.067944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:24:53.068392 systemd[1]: kubelet.service: Consumed 415ms CPU time, 111.6M memory peak. 
Sep 13 10:24:53.320959 dockerd[1811]: time="2025-09-13T10:24:53.320793831Z" level=info msg="Starting up" Sep 13 10:24:53.321970 dockerd[1811]: time="2025-09-13T10:24:53.321925684Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:24:53.386887 dockerd[1811]: time="2025-09-13T10:24:53.386818985Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:24:53.888990 dockerd[1811]: time="2025-09-13T10:24:53.888910365Z" level=info msg="Loading containers: start." Sep 13 10:24:53.899583 kernel: Initializing XFRM netlink socket Sep 13 10:24:54.210137 systemd-networkd[1472]: docker0: Link UP Sep 13 10:24:54.214637 dockerd[1811]: time="2025-09-13T10:24:54.214581311Z" level=info msg="Loading containers: done." Sep 13 10:24:54.231948 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3517731558-merged.mount: Deactivated successfully. Sep 13 10:24:54.233755 dockerd[1811]: time="2025-09-13T10:24:54.233708097Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:24:54.233843 dockerd[1811]: time="2025-09-13T10:24:54.233821330Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:24:54.233956 dockerd[1811]: time="2025-09-13T10:24:54.233939080Z" level=info msg="Initializing buildkit" Sep 13 10:24:54.265563 dockerd[1811]: time="2025-09-13T10:24:54.265475143Z" level=info msg="Completed buildkit initialization" Sep 13 10:24:54.273431 dockerd[1811]: time="2025-09-13T10:24:54.273368940Z" level=info msg="Daemon has completed initialization" Sep 13 10:24:54.273580 dockerd[1811]: time="2025-09-13T10:24:54.273505135Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:24:54.273781 systemd[1]: Started docker.service - 
Docker Application Container Engine. Sep 13 10:24:55.168378 containerd[1571]: time="2025-09-13T10:24:55.168288072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 10:24:55.848554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857535953.mount: Deactivated successfully. Sep 13 10:24:56.734542 containerd[1571]: time="2025-09-13T10:24:56.734459272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:56.735223 containerd[1571]: time="2025-09-13T10:24:56.735179603Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 10:24:56.736365 containerd[1571]: time="2025-09-13T10:24:56.736319120Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:56.739028 containerd[1571]: time="2025-09-13T10:24:56.738997504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:56.739897 containerd[1571]: time="2025-09-13T10:24:56.739852588Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.571495246s" Sep 13 10:24:56.739897 containerd[1571]: time="2025-09-13T10:24:56.739885640Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 10:24:56.740813 
containerd[1571]: time="2025-09-13T10:24:56.740777673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 10:24:57.946211 containerd[1571]: time="2025-09-13T10:24:57.946141661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:57.946977 containerd[1571]: time="2025-09-13T10:24:57.946952883Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 10:24:57.948077 containerd[1571]: time="2025-09-13T10:24:57.948027719Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:57.950497 containerd[1571]: time="2025-09-13T10:24:57.950457006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:57.951377 containerd[1571]: time="2025-09-13T10:24:57.951346614Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.21054189s" Sep 13 10:24:57.951377 containerd[1571]: time="2025-09-13T10:24:57.951376080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 10:24:57.951909 containerd[1571]: time="2025-09-13T10:24:57.951874114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" 
Sep 13 10:24:59.555327 containerd[1571]: time="2025-09-13T10:24:59.555253095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:59.556167 containerd[1571]: time="2025-09-13T10:24:59.556143174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 10:24:59.557413 containerd[1571]: time="2025-09-13T10:24:59.557348264Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:59.560072 containerd[1571]: time="2025-09-13T10:24:59.560030596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:24:59.561233 containerd[1571]: time="2025-09-13T10:24:59.561198466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.609284087s" Sep 13 10:24:59.561233 containerd[1571]: time="2025-09-13T10:24:59.561231398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 10:24:59.562148 containerd[1571]: time="2025-09-13T10:24:59.562102071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 10:25:00.734475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355879538.mount: Deactivated successfully. 
Sep 13 10:25:01.703485 containerd[1571]: time="2025-09-13T10:25:01.703395325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:01.704195 containerd[1571]: time="2025-09-13T10:25:01.704123561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 10:25:01.705283 containerd[1571]: time="2025-09-13T10:25:01.705249704Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:01.707483 containerd[1571]: time="2025-09-13T10:25:01.707430194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:01.708240 containerd[1571]: time="2025-09-13T10:25:01.708166725Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.146028696s" Sep 13 10:25:01.708240 containerd[1571]: time="2025-09-13T10:25:01.708218292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 10:25:01.708979 containerd[1571]: time="2025-09-13T10:25:01.708811364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 10:25:02.335949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500650315.mount: Deactivated successfully. 
Sep 13 10:25:03.050994 containerd[1571]: time="2025-09-13T10:25:03.050920047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:03.051712 containerd[1571]: time="2025-09-13T10:25:03.051643083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 10:25:03.052820 containerd[1571]: time="2025-09-13T10:25:03.052784103Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:03.055304 containerd[1571]: time="2025-09-13T10:25:03.055268543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:03.056332 containerd[1571]: time="2025-09-13T10:25:03.056301020Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.347460711s" Sep 13 10:25:03.056332 containerd[1571]: time="2025-09-13T10:25:03.056329984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 10:25:03.056897 containerd[1571]: time="2025-09-13T10:25:03.056867392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 10:25:03.086735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 10:25:03.088741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 10:25:03.302913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:03.312868 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:25:03.480943 kubelet[2179]: E0913 10:25:03.480831 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:25:03.485516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:25:03.485770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:25:03.486198 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.2M memory peak. Sep 13 10:25:03.846963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225749607.mount: Deactivated successfully. 
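The kubelet exit above (10:25:03) is the usual pre-`kubeadm init` state: `/var/lib/kubelet/config.yaml` does not exist yet, so the unit fails with status 1 and systemd keeps scheduling restarts. As a minimal sketch, the offending path can be pulled straight out of such a journal entry; the regex and helper name below are our own assumptions based on the message format shown in this log, and the sample line is abbreviated:

```python
import re

# Hedged sketch: recover the missing config path from a kubelet
# run.go:72 failure message like the one logged at 10:25:03 above.
# The message layout is inferred from this log and may vary by version.
def missing_kubelet_config(journal_line: str):
    m = re.search(r'failed to load kubelet config file, path: (\S+?),', journal_line)
    return m.group(1) if m else None

# Abbreviated copy of the entry above (error tail elided).
line = ('E0913 10:25:03.480831 2179 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: '
        '/var/lib/kubelet/config.yaml, error: no such file or directory"')
print(missing_kubelet_config(line))  # /var/lib/kubelet/config.yaml
```

Once `kubeadm init` (or an equivalent bootstrap) writes that file, the restart loop resolves itself, which matches the successful start seen later at 10:25:10.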
Sep 13 10:25:03.853551 containerd[1571]: time="2025-09-13T10:25:03.853490359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:25:03.854424 containerd[1571]: time="2025-09-13T10:25:03.854391059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 10:25:03.855580 containerd[1571]: time="2025-09-13T10:25:03.855543701Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:25:03.857454 containerd[1571]: time="2025-09-13T10:25:03.857407757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:25:03.857981 containerd[1571]: time="2025-09-13T10:25:03.857950725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 801.055601ms" Sep 13 10:25:03.857981 containerd[1571]: time="2025-09-13T10:25:03.857978447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 10:25:03.858434 containerd[1571]: time="2025-09-13T10:25:03.858414345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 10:25:04.469888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191682967.mount: 
Deactivated successfully. Sep 13 10:25:06.767851 containerd[1571]: time="2025-09-13T10:25:06.767792246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:06.768636 containerd[1571]: time="2025-09-13T10:25:06.768583991Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 10:25:06.769781 containerd[1571]: time="2025-09-13T10:25:06.769745901Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:06.772019 containerd[1571]: time="2025-09-13T10:25:06.771982897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:06.773162 containerd[1571]: time="2025-09-13T10:25:06.773121543Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.914674276s" Sep 13 10:25:06.773162 containerd[1571]: time="2025-09-13T10:25:06.773154474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 10:25:09.446515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:09.446713 systemd[1]: kubelet.service: Consumed 222ms CPU time, 109.2M memory peak. Sep 13 10:25:09.448929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
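Each "Pulled image" entry above reports both the image size and the elapsed wall time, so average pull throughput falls out directly. Using the etcd figures from the entry above (57,680,541 bytes in ~2.915 s); the helper name is ours:

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    """Average image-pull throughput in MiB/s from containerd's
    reported size and elapsed time."""
    return size_bytes / (1024 * 1024) / seconds

# Figures taken from the etcd:3.5.16-0 pull logged above.
rate = pull_rate_mib_s(57680541, 2.914674276)
print(f"{rate:.1f} MiB/s")  # roughly 18.9 MiB/s
```

The same arithmetic applied to the pause:3.10 pull (320,368 bytes in ~801 ms) gives a far lower figure, a reminder that per-request overhead dominates for tiny images.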
Sep 13 10:25:09.474654 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-9.scope)... Sep 13 10:25:09.474672 systemd[1]: Reloading... Sep 13 10:25:09.559102 zram_generator::config[2318]: No configuration found. Sep 13 10:25:09.919738 systemd[1]: Reloading finished in 444 ms. Sep 13 10:25:10.010278 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 10:25:10.010378 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 10:25:10.010760 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:10.010818 systemd[1]: kubelet.service: Consumed 192ms CPU time, 98.2M memory peak. Sep 13 10:25:10.012702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:25:10.223185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:10.240198 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:25:10.298939 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:25:10.298939 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:25:10.298939 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:25:10.299545 kubelet[2366]: I0913 10:25:10.299050 2366 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:25:10.587217 kubelet[2366]: I0913 10:25:10.587168 2366 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 10:25:10.587217 kubelet[2366]: I0913 10:25:10.587201 2366 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:25:10.587517 kubelet[2366]: I0913 10:25:10.587490 2366 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 10:25:10.618097 kubelet[2366]: E0913 10:25:10.618046 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:10.620022 kubelet[2366]: I0913 10:25:10.619719 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:25:10.628791 kubelet[2366]: I0913 10:25:10.628765 2366 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:25:10.634843 kubelet[2366]: I0913 10:25:10.634801 2366 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:25:10.636350 kubelet[2366]: I0913 10:25:10.636259 2366 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:25:10.636549 kubelet[2366]: I0913 10:25:10.636318 2366 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:25:10.636857 kubelet[2366]: I0913 10:25:10.636582 2366 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 10:25:10.636857 kubelet[2366]: I0913 10:25:10.636595 2366 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 10:25:10.636857 kubelet[2366]: I0913 10:25:10.636747 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:25:10.639767 kubelet[2366]: I0913 10:25:10.639722 2366 kubelet.go:446] "Attempting to sync node with API server" Sep 13 10:25:10.639767 kubelet[2366]: I0913 10:25:10.639763 2366 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:25:10.639860 kubelet[2366]: I0913 10:25:10.639797 2366 kubelet.go:352] "Adding apiserver pod source" Sep 13 10:25:10.639860 kubelet[2366]: I0913 10:25:10.639835 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:25:10.646124 kubelet[2366]: W0913 10:25:10.646040 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:10.646124 kubelet[2366]: E0913 10:25:10.646124 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:10.646801 kubelet[2366]: I0913 10:25:10.646712 2366 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:25:10.647181 kubelet[2366]: W0913 10:25:10.647119 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 
10:25:10.647181 kubelet[2366]: E0913 10:25:10.647161 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:10.647365 kubelet[2366]: I0913 10:25:10.647341 2366 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 10:25:10.648462 kubelet[2366]: W0913 10:25:10.648374 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 10:25:10.651575 kubelet[2366]: I0913 10:25:10.651075 2366 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:25:10.651575 kubelet[2366]: I0913 10:25:10.651121 2366 server.go:1287] "Started kubelet" Sep 13 10:25:10.653148 kubelet[2366]: I0913 10:25:10.653060 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:25:10.654136 kubelet[2366]: I0913 10:25:10.654115 2366 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:25:10.654231 kubelet[2366]: I0913 10:25:10.654202 2366 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:25:10.656223 kubelet[2366]: I0913 10:25:10.655812 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:25:10.656223 kubelet[2366]: I0913 10:25:10.656021 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:25:10.657447 kubelet[2366]: I0913 10:25:10.657414 2366 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:25:10.657501 kubelet[2366]: E0913 10:25:10.657484 2366 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:10.659674 kubelet[2366]: W0913 10:25:10.659617 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:10.659735 kubelet[2366]: E0913 10:25:10.659687 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:10.659735 kubelet[2366]: E0913 10:25:10.659715 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 13 10:25:10.659798 kubelet[2366]: I0913 10:25:10.659729 2366 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:25:10.659937 kubelet[2366]: I0913 10:25:10.659903 2366 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:25:10.660136 kubelet[2366]: I0913 10:25:10.660120 2366 server.go:479] "Adding debug handlers to kubelet server" Sep 13 10:25:10.660798 kubelet[2366]: I0913 10:25:10.660770 2366 factory.go:221] Registration of the systemd container factory successfully Sep 13 10:25:10.660936 kubelet[2366]: I0913 10:25:10.660918 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:25:10.661892 kubelet[2366]: E0913 10:25:10.661857 2366 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:25:10.662388 kubelet[2366]: I0913 10:25:10.662332 2366 factory.go:221] Registration of the containerd container factory successfully Sep 13 10:25:10.662639 kubelet[2366]: E0913 10:25:10.659785 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864d0991f10a2e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:25:10.65109373 +0000 UTC m=+0.401751923,LastTimestamp:2025-09-13 10:25:10.65109373 +0000 UTC m=+0.401751923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:25:10.677431 kubelet[2366]: I0913 10:25:10.677343 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 10:25:10.679205 kubelet[2366]: I0913 10:25:10.679180 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 10:25:10.679283 kubelet[2366]: I0913 10:25:10.679223 2366 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 10:25:10.679283 kubelet[2366]: I0913 10:25:10.679257 2366 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 10:25:10.679283 kubelet[2366]: I0913 10:25:10.679271 2366 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 10:25:10.679404 kubelet[2366]: E0913 10:25:10.679343 2366 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:25:10.681495 kubelet[2366]: W0913 10:25:10.680609 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:10.681495 kubelet[2366]: E0913 10:25:10.680661 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:10.681495 kubelet[2366]: I0913 10:25:10.681350 2366 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:25:10.681495 kubelet[2366]: I0913 10:25:10.681362 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:25:10.681495 kubelet[2366]: I0913 10:25:10.681380 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:25:10.758271 kubelet[2366]: E0913 10:25:10.758224 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:10.779476 kubelet[2366]: E0913 10:25:10.779454 2366 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:25:10.858813 kubelet[2366]: E0913 10:25:10.858731 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:10.860274 kubelet[2366]: E0913 10:25:10.860246 2366 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Sep 13 10:25:10.959584 kubelet[2366]: E0913 10:25:10.959554 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:10.979926 kubelet[2366]: E0913 10:25:10.979896 2366 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:25:11.060324 kubelet[2366]: E0913 10:25:11.060278 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:11.146609 kubelet[2366]: I0913 10:25:11.146456 2366 policy_none.go:49] "None policy: Start" Sep 13 10:25:11.146609 kubelet[2366]: I0913 10:25:11.146498 2366 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:25:11.146609 kubelet[2366]: I0913 10:25:11.146547 2366 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:25:11.154092 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 10:25:11.160948 kubelet[2366]: E0913 10:25:11.160926 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:11.165187 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 10:25:11.168655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 10:25:11.177663 kubelet[2366]: I0913 10:25:11.177617 2366 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 10:25:11.177937 kubelet[2366]: I0913 10:25:11.177916 2366 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:25:11.177983 kubelet[2366]: I0913 10:25:11.177936 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:25:11.178766 kubelet[2366]: I0913 10:25:11.178338 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:25:11.179039 kubelet[2366]: E0913 10:25:11.179006 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:25:11.179090 kubelet[2366]: E0913 10:25:11.179049 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 10:25:11.261823 kubelet[2366]: E0913 10:25:11.261753 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Sep 13 10:25:11.281933 kubelet[2366]: I0913 10:25:11.281892 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:25:11.282264 kubelet[2366]: E0913 10:25:11.282228 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 13 10:25:11.304933 kubelet[2366]: E0913 10:25:11.304822 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: 
connection refused" event="&Event{ObjectMeta:{localhost.1864d0991f10a2e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:25:10.65109373 +0000 UTC m=+0.401751923,LastTimestamp:2025-09-13 10:25:10.65109373 +0000 UTC m=+0.401751923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:25:11.388818 systemd[1]: Created slice kubepods-burstable-pod3ef87a1c1a96a1648dcec29c7af365df.slice - libcontainer container kubepods-burstable-pod3ef87a1c1a96a1648dcec29c7af365df.slice. Sep 13 10:25:11.409166 kubelet[2366]: E0913 10:25:11.409033 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:11.412312 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 13 10:25:11.431043 kubelet[2366]: E0913 10:25:11.431002 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:11.434169 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
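While the API server at 10.0.0.117:6443 stays unreachable, the "Failed to ensure lease exists, will retry" entries show the retry interval doubling: 200ms at 10:25:10.659, 400ms at 10:25:10.860, 800ms at 10:25:11.261. A generic exponential-backoff sketch reproduces that progression; the cap value below is an illustrative assumption, not taken from the kubelet source:

```python
def backoff_intervals(base_ms: int = 200, factor: int = 2,
                      cap_ms: int = 7000, steps: int = 6):
    """Yield retry intervals that double up to a cap, matching the
    200ms -> 400ms -> 800ms progression visible in the log above."""
    interval = base_ms
    for _ in range(steps):
        yield min(interval, cap_ms)
        interval *= factor

print(list(backoff_intervals()))  # [200, 400, 800, 1600, 3200, 6400]
```

Capped doubling like this keeps a dead control plane from being hammered while still reconnecting quickly once it comes up.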
Sep 13 10:25:11.436553 kubelet[2366]: E0913 10:25:11.436062 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:11.464352 kubelet[2366]: I0913 10:25:11.464315 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:11.464352 kubelet[2366]: I0913 10:25:11.464350 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:11.464511 kubelet[2366]: I0913 10:25:11.464374 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:11.464511 kubelet[2366]: I0913 10:25:11.464421 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:11.464511 kubelet[2366]: I0913 10:25:11.464438 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:11.464511 kubelet[2366]: I0913 10:25:11.464452 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:11.464511 kubelet[2366]: I0913 10:25:11.464466 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:11.465885 kubelet[2366]: I0913 10:25:11.464480 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:11.465885 kubelet[2366]: I0913 10:25:11.464545 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:11.484784 kubelet[2366]: I0913 10:25:11.484759 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:25:11.485276 kubelet[2366]: 
E0913 10:25:11.485233 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 13 10:25:11.667284 kubelet[2366]: W0913 10:25:11.667094 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:11.667284 kubelet[2366]: E0913 10:25:11.667203 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:11.709907 kubelet[2366]: E0913 10:25:11.709867 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:11.710526 containerd[1571]: time="2025-09-13T10:25:11.710487895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ef87a1c1a96a1648dcec29c7af365df,Namespace:kube-system,Attempt:0,}" Sep 13 10:25:11.731936 kubelet[2366]: E0913 10:25:11.731881 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:11.732488 containerd[1571]: time="2025-09-13T10:25:11.732435230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 10:25:11.734128 kubelet[2366]: W0913 10:25:11.734043 2366 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:11.734211 kubelet[2366]: E0913 10:25:11.734126 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:11.736570 kubelet[2366]: E0913 10:25:11.736525 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:11.737056 containerd[1571]: time="2025-09-13T10:25:11.737019754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 10:25:11.872558 containerd[1571]: time="2025-09-13T10:25:11.871897608Z" level=info msg="connecting to shim b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7" address="unix:///run/containerd/s/1abb2142791a7dd321b3adb8a0f9e73d7539a7819a7347907b4d0d15006b45a8" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:25:11.874609 containerd[1571]: time="2025-09-13T10:25:11.874562447Z" level=info msg="connecting to shim fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa" address="unix:///run/containerd/s/692e9839855c7e5c4e38ce2eaf50870fad398985ebdb42af1631c32f4f6c51a6" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:25:11.900115 kubelet[2366]: W0913 10:25:11.900041 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.117:6443: connect: connection refused Sep 13 10:25:11.900115 kubelet[2366]: E0913 10:25:11.900114 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:11.901673 kubelet[2366]: I0913 10:25:11.901610 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:25:11.902965 containerd[1571]: time="2025-09-13T10:25:11.902895724Z" level=info msg="connecting to shim 65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e" address="unix:///run/containerd/s/aea2e57bf53a0377ba1626b6f53dc7a875e24bb433196eb38ecc0c606a300a05" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:25:11.903042 kubelet[2366]: E0913 10:25:11.903017 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 13 10:25:11.957767 systemd[1]: Started cri-containerd-b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7.scope - libcontainer container b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7. Sep 13 10:25:11.960076 systemd[1]: Started cri-containerd-fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa.scope - libcontainer container fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa. Sep 13 10:25:11.964886 systemd[1]: Started cri-containerd-65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e.scope - libcontainer container 65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e. 
Sep 13 10:25:12.020588 kubelet[2366]: W0913 10:25:12.017876 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 10:25:12.020588 kubelet[2366]: E0913 10:25:12.017966 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 10:25:12.050607 containerd[1571]: time="2025-09-13T10:25:12.050547144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ef87a1c1a96a1648dcec29c7af365df,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa\"" Sep 13 10:25:12.051766 kubelet[2366]: E0913 10:25:12.051738 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.053827 containerd[1571]: time="2025-09-13T10:25:12.053785026Z" level=info msg="CreateContainer within sandbox \"fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 10:25:12.054119 containerd[1571]: time="2025-09-13T10:25:12.054038403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7\"" Sep 13 10:25:12.055029 containerd[1571]: time="2025-09-13T10:25:12.054962318Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e\"" Sep 13 10:25:12.055373 kubelet[2366]: E0913 10:25:12.055355 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.055724 kubelet[2366]: E0913 10:25:12.055672 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.057229 containerd[1571]: time="2025-09-13T10:25:12.057196512Z" level=info msg="CreateContainer within sandbox \"b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 10:25:12.057311 containerd[1571]: time="2025-09-13T10:25:12.057268170Z" level=info msg="CreateContainer within sandbox \"65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 10:25:12.063033 kubelet[2366]: E0913 10:25:12.062985 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Sep 13 10:25:12.065061 containerd[1571]: time="2025-09-13T10:25:12.065010309Z" level=info msg="Container 0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:12.073903 containerd[1571]: time="2025-09-13T10:25:12.073835829Z" level=info msg="Container 06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:12.075672 
containerd[1571]: time="2025-09-13T10:25:12.075621260Z" level=info msg="CreateContainer within sandbox \"fc63b55a36addd65d03a216c7512e4816ca28c726e9fe38ade734800a59afaaa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb\"" Sep 13 10:25:12.077355 containerd[1571]: time="2025-09-13T10:25:12.077007384Z" level=info msg="StartContainer for \"0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb\"" Sep 13 10:25:12.077991 containerd[1571]: time="2025-09-13T10:25:12.077948622Z" level=info msg="Container 922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:12.079459 containerd[1571]: time="2025-09-13T10:25:12.079406515Z" level=info msg="connecting to shim 0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb" address="unix:///run/containerd/s/692e9839855c7e5c4e38ce2eaf50870fad398985ebdb42af1631c32f4f6c51a6" protocol=ttrpc version=3 Sep 13 10:25:12.083162 containerd[1571]: time="2025-09-13T10:25:12.083127344Z" level=info msg="CreateContainer within sandbox \"65f5b46ea54782aeeb635d6077948e2dd0e8cb425523f2041e8d361c32fd691e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269\"" Sep 13 10:25:12.083746 containerd[1571]: time="2025-09-13T10:25:12.083706297Z" level=info msg="StartContainer for \"06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269\"" Sep 13 10:25:12.085092 containerd[1571]: time="2025-09-13T10:25:12.085060960Z" level=info msg="connecting to shim 06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269" address="unix:///run/containerd/s/aea2e57bf53a0377ba1626b6f53dc7a875e24bb433196eb38ecc0c606a300a05" protocol=ttrpc version=3 Sep 13 10:25:12.090593 containerd[1571]: time="2025-09-13T10:25:12.090504391Z" level=info msg="CreateContainer within sandbox 
\"b514e168517303732eb1ef33fe180ca90bfe752dafa5e0f04879dc6050bdf3a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7\"" Sep 13 10:25:12.092564 containerd[1571]: time="2025-09-13T10:25:12.091717773Z" level=info msg="StartContainer for \"922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7\"" Sep 13 10:25:12.093104 containerd[1571]: time="2025-09-13T10:25:12.093074270Z" level=info msg="connecting to shim 922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7" address="unix:///run/containerd/s/1abb2142791a7dd321b3adb8a0f9e73d7539a7819a7347907b4d0d15006b45a8" protocol=ttrpc version=3 Sep 13 10:25:12.103944 systemd[1]: Started cri-containerd-0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb.scope - libcontainer container 0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb. Sep 13 10:25:12.111764 systemd[1]: Started cri-containerd-06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269.scope - libcontainer container 06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269. Sep 13 10:25:12.117458 systemd[1]: Started cri-containerd-922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7.scope - libcontainer container 922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7. 
Sep 13 10:25:12.251132 containerd[1571]: time="2025-09-13T10:25:12.250948834Z" level=info msg="StartContainer for \"06493fc8badb01e001d375b65382c29cf0aa27f803067730f57348a9bc2bd269\" returns successfully" Sep 13 10:25:12.258227 containerd[1571]: time="2025-09-13T10:25:12.258172135Z" level=info msg="StartContainer for \"0503888fc45c7b8b526150000634a71424d05c361e21b8dfa29429e36477f1eb\" returns successfully" Sep 13 10:25:12.277615 containerd[1571]: time="2025-09-13T10:25:12.277550585Z" level=info msg="StartContainer for \"922035c8e4d383d00e7c6e4e67aea970ae243476671d25759411127938d6add7\" returns successfully" Sep 13 10:25:12.691471 kubelet[2366]: E0913 10:25:12.691430 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:12.693409 kubelet[2366]: E0913 10:25:12.693384 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.694212 kubelet[2366]: E0913 10:25:12.694187 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:12.694298 kubelet[2366]: E0913 10:25:12.694277 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.695540 kubelet[2366]: E0913 10:25:12.695505 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:12.695674 kubelet[2366]: E0913 10:25:12.695653 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:12.705149 
kubelet[2366]: I0913 10:25:12.705118 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:25:13.751480 kubelet[2366]: E0913 10:25:13.696739 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:13.751480 kubelet[2366]: E0913 10:25:13.696876 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:13.751480 kubelet[2366]: E0913 10:25:13.697012 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:25:13.751480 kubelet[2366]: E0913 10:25:13.697095 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:13.877572 kubelet[2366]: E0913 10:25:13.877139 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 10:25:13.963572 kubelet[2366]: I0913 10:25:13.963002 2366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:25:14.058712 kubelet[2366]: I0913 10:25:14.058562 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:14.063125 kubelet[2366]: E0913 10:25:14.063100 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:14.063125 kubelet[2366]: I0913 10:25:14.063118 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:14.064602 
kubelet[2366]: E0913 10:25:14.064564 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:14.064602 kubelet[2366]: I0913 10:25:14.064599 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:14.065922 kubelet[2366]: E0913 10:25:14.065901 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:14.647594 kubelet[2366]: I0913 10:25:14.647549 2366 apiserver.go:52] "Watching apiserver" Sep 13 10:25:14.658243 kubelet[2366]: I0913 10:25:14.658196 2366 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 10:25:14.696987 kubelet[2366]: I0913 10:25:14.696961 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:14.701525 kubelet[2366]: E0913 10:25:14.701503 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:14.701673 kubelet[2366]: E0913 10:25:14.701655 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:15.671258 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-9.scope)... Sep 13 10:25:15.671278 systemd[1]: Reloading... Sep 13 10:25:15.756624 zram_generator::config[2693]: No configuration found. Sep 13 10:25:15.995504 systemd[1]: Reloading finished in 323 ms. 
Sep 13 10:25:16.021316 kubelet[2366]: I0913 10:25:16.021255 2366 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:25:16.021297 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:25:16.038962 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 10:25:16.039289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:16.039339 systemd[1]: kubelet.service: Consumed 915ms CPU time, 132.1M memory peak. Sep 13 10:25:16.041193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:25:16.268596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:25:16.272885 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:25:16.321249 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:25:16.321249 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:25:16.321249 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:25:16.321792 kubelet[2735]: I0913 10:25:16.321333 2735 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:25:16.328089 kubelet[2735]: I0913 10:25:16.328044 2735 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 10:25:16.328089 kubelet[2735]: I0913 10:25:16.328072 2735 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:25:16.328367 kubelet[2735]: I0913 10:25:16.328338 2735 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 10:25:16.329740 kubelet[2735]: I0913 10:25:16.329703 2735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 10:25:16.331856 kubelet[2735]: I0913 10:25:16.331828 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:25:16.337408 kubelet[2735]: I0913 10:25:16.337378 2735 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:25:16.342202 kubelet[2735]: I0913 10:25:16.342157 2735 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:25:16.342499 kubelet[2735]: I0913 10:25:16.342456 2735 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:25:16.342726 kubelet[2735]: I0913 10:25:16.342486 2735 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:25:16.342875 kubelet[2735]: I0913 10:25:16.342732 2735 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 10:25:16.342875 kubelet[2735]: I0913 10:25:16.342746 2735 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 10:25:16.342875 kubelet[2735]: I0913 10:25:16.342811 2735 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:25:16.343012 kubelet[2735]: I0913 10:25:16.342993 2735 kubelet.go:446] "Attempting to sync node with API server" Sep 13 10:25:16.343064 kubelet[2735]: I0913 10:25:16.343027 2735 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:25:16.343064 kubelet[2735]: I0913 10:25:16.343055 2735 kubelet.go:352] "Adding apiserver pod source" Sep 13 10:25:16.343137 kubelet[2735]: I0913 10:25:16.343068 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:25:16.346550 kubelet[2735]: I0913 10:25:16.344367 2735 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:25:16.346550 kubelet[2735]: I0913 10:25:16.344878 2735 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 10:25:16.346550 kubelet[2735]: I0913 10:25:16.345394 2735 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:25:16.346550 kubelet[2735]: I0913 10:25:16.345427 2735 server.go:1287] "Started kubelet" Sep 13 10:25:16.350847 kubelet[2735]: I0913 10:25:16.350793 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:25:16.351341 kubelet[2735]: I0913 10:25:16.351296 2735 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:25:16.351492 kubelet[2735]: I0913 10:25:16.351469 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:25:16.354311 kubelet[2735]: I0913 10:25:16.354278 2735 server.go:479] "Adding debug handlers to kubelet server" Sep 13 10:25:16.355315 
kubelet[2735]: I0913 10:25:16.351313 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:25:16.355502 kubelet[2735]: I0913 10:25:16.351310 2735 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:25:16.356164 kubelet[2735]: I0913 10:25:16.356127 2735 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:25:16.356271 kubelet[2735]: I0913 10:25:16.356232 2735 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:25:16.356383 kubelet[2735]: I0913 10:25:16.356359 2735 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:25:16.356744 kubelet[2735]: E0913 10:25:16.356720 2735 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:25:16.358143 kubelet[2735]: E0913 10:25:16.356845 2735 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:25:16.359150 kubelet[2735]: I0913 10:25:16.359117 2735 factory.go:221] Registration of the systemd container factory successfully Sep 13 10:25:16.359523 kubelet[2735]: I0913 10:25:16.359463 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:25:16.362014 kubelet[2735]: I0913 10:25:16.361981 2735 factory.go:221] Registration of the containerd container factory successfully Sep 13 10:25:16.373859 kubelet[2735]: I0913 10:25:16.373807 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 10:25:16.375670 kubelet[2735]: I0913 10:25:16.375631 2735 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:25:16.375731 kubelet[2735]: I0913 10:25:16.375691 2735 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 10:25:16.375731 kubelet[2735]: I0913 10:25:16.375712 2735 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 10:25:16.376581 kubelet[2735]: I0913 10:25:16.375719 2735 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 10:25:16.376779 kubelet[2735]: E0913 10:25:16.376736 2735 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:25:16.400076 kubelet[2735]: I0913 10:25:16.400033 2735 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:25:16.400076 kubelet[2735]: I0913 10:25:16.400057 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:25:16.400076 kubelet[2735]: I0913 10:25:16.400075 2735 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:25:16.400279 kubelet[2735]: I0913 10:25:16.400224 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 10:25:16.400279 kubelet[2735]: I0913 10:25:16.400237 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 10:25:16.400279 kubelet[2735]: I0913 10:25:16.400257 2735 policy_none.go:49] "None policy: Start" Sep 13 10:25:16.400279 kubelet[2735]: I0913 10:25:16.400268 2735 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:25:16.400279 kubelet[2735]: I0913 10:25:16.400280 2735 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:25:16.400455 kubelet[2735]: I0913 10:25:16.400390 2735 state_mem.go:75] "Updated machine memory state" Sep 13 10:25:16.404399 kubelet[2735]: I0913 10:25:16.404362 2735 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 10:25:16.404642 kubelet[2735]: I0913 
10:25:16.404612 2735 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:25:16.404727 kubelet[2735]: I0913 10:25:16.404643 2735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:25:16.405082 kubelet[2735]: I0913 10:25:16.405061 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:25:16.405926 kubelet[2735]: E0913 10:25:16.405885 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:25:16.478191 kubelet[2735]: I0913 10:25:16.478133 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:16.478371 kubelet[2735]: I0913 10:25:16.478221 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:25:16.478371 kubelet[2735]: I0913 10:25:16.478136 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:16.509698 kubelet[2735]: I0913 10:25:16.509549 2735 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:25:16.522699 kubelet[2735]: I0913 10:25:16.522570 2735 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 10:25:16.522699 kubelet[2735]: I0913 10:25:16.522678 2735 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:25:16.657739 kubelet[2735]: I0913 10:25:16.657689 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:16.657739 kubelet[2735]: I0913 10:25:16.657751 2735 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:16.657954 kubelet[2735]: I0913 10:25:16.657780 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:16.657954 kubelet[2735]: I0913 10:25:16.657803 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:16.657954 kubelet[2735]: I0913 10:25:16.657822 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef87a1c1a96a1648dcec29c7af365df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ef87a1c1a96a1648dcec29c7af365df\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:25:16.657954 kubelet[2735]: I0913 10:25:16.657840 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:25:16.657954 kubelet[2735]: I0913 10:25:16.657858 2735 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:25:16.658127 kubelet[2735]: I0913 10:25:16.657873 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:25:16.658127 kubelet[2735]: I0913 10:25:16.658016 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 10:25:16.783325 kubelet[2735]: E0913 10:25:16.783188 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:16.783325 kubelet[2735]: E0913 10:25:16.783239 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:16.783325 kubelet[2735]: E0913 10:25:16.783320 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:17.343888 kubelet[2735]: I0913 10:25:17.343831 2735 apiserver.go:52] "Watching apiserver"
Sep 13 10:25:17.357083 kubelet[2735]: I0913 10:25:17.357043 2735 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 10:25:17.392481 kubelet[2735]: I0913 10:25:17.392439 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:25:17.392669 kubelet[2735]: I0913 10:25:17.392576 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:25:17.392839 kubelet[2735]: I0913 10:25:17.392809 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:25:17.621069 kubelet[2735]: E0913 10:25:17.620878 2735 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:25:17.621069 kubelet[2735]: E0913 10:25:17.620919 2735 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:25:17.621251 kubelet[2735]: E0913 10:25:17.621096 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:17.621251 kubelet[2735]: E0913 10:25:17.621117 2735 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:25:17.622175 kubelet[2735]: E0913 10:25:17.621280 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:17.622175 kubelet[2735]: E0913 10:25:17.622155 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:17.641189 kubelet[2735]: I0913 10:25:17.640851 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.640806989 podStartE2EDuration="1.640806989s" podCreationTimestamp="2025-09-13 10:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:25:17.640564486 +0000 UTC m=+1.362429567" watchObservedRunningTime="2025-09-13 10:25:17.640806989 +0000 UTC m=+1.362672070"
Sep 13 10:25:17.653379 kubelet[2735]: I0913 10:25:17.653319 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.653297173 podStartE2EDuration="1.653297173s" podCreationTimestamp="2025-09-13 10:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:25:17.647809037 +0000 UTC m=+1.369674118" watchObservedRunningTime="2025-09-13 10:25:17.653297173 +0000 UTC m=+1.375162255"
Sep 13 10:25:18.393696 kubelet[2735]: E0913 10:25:18.393635 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:18.393696 kubelet[2735]: E0913 10:25:18.393636 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:18.394159 kubelet[2735]: E0913 10:25:18.393865 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:19.395968 kubelet[2735]: E0913 10:25:19.395873 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:19.395968 kubelet[2735]: E0913 10:25:19.395925 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:22.260507 kubelet[2735]: I0913 10:25:22.260442 2735 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 10:25:22.261058 containerd[1571]: time="2025-09-13T10:25:22.260980840Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 10:25:22.261362 kubelet[2735]: I0913 10:25:22.261261 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 10:25:23.243565 kubelet[2735]: I0913 10:25:23.241511 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.241483427 podStartE2EDuration="7.241483427s" podCreationTimestamp="2025-09-13 10:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:25:17.654184498 +0000 UTC m=+1.376049579" watchObservedRunningTime="2025-09-13 10:25:23.241483427 +0000 UTC m=+6.963348508"
Sep 13 10:25:23.250739 systemd[1]: Created slice kubepods-besteffort-pod46c39100_4356_4aad_a9fe_376cab9229cb.slice - libcontainer container kubepods-besteffort-pod46c39100_4356_4aad_a9fe_376cab9229cb.slice.
Sep 13 10:25:23.296945 kubelet[2735]: I0913 10:25:23.296888 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46c39100-4356-4aad-a9fe-376cab9229cb-kube-proxy\") pod \"kube-proxy-2tjmf\" (UID: \"46c39100-4356-4aad-a9fe-376cab9229cb\") " pod="kube-system/kube-proxy-2tjmf"
Sep 13 10:25:23.296945 kubelet[2735]: I0913 10:25:23.296952 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46c39100-4356-4aad-a9fe-376cab9229cb-xtables-lock\") pod \"kube-proxy-2tjmf\" (UID: \"46c39100-4356-4aad-a9fe-376cab9229cb\") " pod="kube-system/kube-proxy-2tjmf"
Sep 13 10:25:23.297691 kubelet[2735]: I0913 10:25:23.296984 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46c39100-4356-4aad-a9fe-376cab9229cb-lib-modules\") pod \"kube-proxy-2tjmf\" (UID: \"46c39100-4356-4aad-a9fe-376cab9229cb\") " pod="kube-system/kube-proxy-2tjmf"
Sep 13 10:25:23.297691 kubelet[2735]: I0913 10:25:23.297016 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdcp\" (UniqueName: \"kubernetes.io/projected/46c39100-4356-4aad-a9fe-376cab9229cb-kube-api-access-shdcp\") pod \"kube-proxy-2tjmf\" (UID: \"46c39100-4356-4aad-a9fe-376cab9229cb\") " pod="kube-system/kube-proxy-2tjmf"
Sep 13 10:25:23.370245 systemd[1]: Created slice kubepods-besteffort-podac28cb1d_697a_47ec_97ce_51299fffae84.slice - libcontainer container kubepods-besteffort-podac28cb1d_697a_47ec_97ce_51299fffae84.slice.
Sep 13 10:25:23.397731 kubelet[2735]: I0913 10:25:23.397699 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lmd\" (UniqueName: \"kubernetes.io/projected/ac28cb1d-697a-47ec-97ce-51299fffae84-kube-api-access-m9lmd\") pod \"tigera-operator-755d956888-cr2dn\" (UID: \"ac28cb1d-697a-47ec-97ce-51299fffae84\") " pod="tigera-operator/tigera-operator-755d956888-cr2dn"
Sep 13 10:25:23.397847 kubelet[2735]: I0913 10:25:23.397760 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac28cb1d-697a-47ec-97ce-51299fffae84-var-lib-calico\") pod \"tigera-operator-755d956888-cr2dn\" (UID: \"ac28cb1d-697a-47ec-97ce-51299fffae84\") " pod="tigera-operator/tigera-operator-755d956888-cr2dn"
Sep 13 10:25:23.559466 kubelet[2735]: E0913 10:25:23.559326 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:23.560103 containerd[1571]: time="2025-09-13T10:25:23.560054866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tjmf,Uid:46c39100-4356-4aad-a9fe-376cab9229cb,Namespace:kube-system,Attempt:0,}"
Sep 13 10:25:23.581617 containerd[1571]: time="2025-09-13T10:25:23.581577293Z" level=info msg="connecting to shim a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5" address="unix:///run/containerd/s/1754034a7fe42c30e3fc5765b199a7e94b7306ad2f087f915c5b91b283f19538" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:25:23.630700 systemd[1]: Started cri-containerd-a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5.scope - libcontainer container a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5.
Sep 13 10:25:23.658842 containerd[1571]: time="2025-09-13T10:25:23.658792493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tjmf,Uid:46c39100-4356-4aad-a9fe-376cab9229cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5\""
Sep 13 10:25:23.659715 kubelet[2735]: E0913 10:25:23.659684 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:23.662091 containerd[1571]: time="2025-09-13T10:25:23.662023487Z" level=info msg="CreateContainer within sandbox \"a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 10:25:23.675557 containerd[1571]: time="2025-09-13T10:25:23.673676629Z" level=info msg="Container ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:25:23.675557 containerd[1571]: time="2025-09-13T10:25:23.674067180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-cr2dn,Uid:ac28cb1d-697a-47ec-97ce-51299fffae84,Namespace:tigera-operator,Attempt:0,}"
Sep 13 10:25:23.684986 containerd[1571]: time="2025-09-13T10:25:23.684945541Z" level=info msg="CreateContainer within sandbox \"a0cf6e067a040a5a3321bacad2f466e84fcce9ad65998152725731a817e695c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c\""
Sep 13 10:25:23.685494 containerd[1571]: time="2025-09-13T10:25:23.685461390Z" level=info msg="StartContainer for \"ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c\""
Sep 13 10:25:23.686920 containerd[1571]: time="2025-09-13T10:25:23.686886065Z" level=info msg="connecting to shim ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c" address="unix:///run/containerd/s/1754034a7fe42c30e3fc5765b199a7e94b7306ad2f087f915c5b91b283f19538" protocol=ttrpc version=3
Sep 13 10:25:23.698358 containerd[1571]: time="2025-09-13T10:25:23.698323537Z" level=info msg="connecting to shim 099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb" address="unix:///run/containerd/s/6539a96a3d12f2b12f5e060deac3224fc84b4f6bac25234537698f25addf0cb8" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:25:23.711868 systemd[1]: Started cri-containerd-ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c.scope - libcontainer container ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c.
Sep 13 10:25:23.725895 systemd[1]: Started cri-containerd-099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb.scope - libcontainer container 099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb.
Sep 13 10:25:23.772562 containerd[1571]: time="2025-09-13T10:25:23.771863040Z" level=info msg="StartContainer for \"ab472d14b60c472e10e179d032d620f4453b790c7a9597a5ee27e080a5bc8d7c\" returns successfully"
Sep 13 10:25:23.774362 containerd[1571]: time="2025-09-13T10:25:23.774322229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-cr2dn,Uid:ac28cb1d-697a-47ec-97ce-51299fffae84,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb\""
Sep 13 10:25:23.776070 containerd[1571]: time="2025-09-13T10:25:23.776036724Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 10:25:23.826801 update_engine[1518]: I20250913 10:25:23.826611 1518 update_attempter.cc:509] Updating boot flags...
Sep 13 10:25:24.405020 kubelet[2735]: E0913 10:25:24.404976 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:24.409827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692447852.mount: Deactivated successfully.
Sep 13 10:25:25.607515 kubelet[2735]: E0913 10:25:25.607460 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:25.621825 kubelet[2735]: I0913 10:25:25.621757 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2tjmf" podStartSLOduration=2.621736707 podStartE2EDuration="2.621736707s" podCreationTimestamp="2025-09-13 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:25:25.077550016 +0000 UTC m=+8.799415098" watchObservedRunningTime="2025-09-13 10:25:25.621736707 +0000 UTC m=+9.343601798"
Sep 13 10:25:26.238222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640379467.mount: Deactivated successfully.
Sep 13 10:25:26.409431 kubelet[2735]: E0913 10:25:26.409388 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:26.598682 containerd[1571]: time="2025-09-13T10:25:26.598641188Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:26.599498 containerd[1571]: time="2025-09-13T10:25:26.599470610Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 10:25:26.600608 containerd[1571]: time="2025-09-13T10:25:26.600584229Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:26.602636 containerd[1571]: time="2025-09-13T10:25:26.602583998Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:26.603145 containerd[1571]: time="2025-09-13T10:25:26.603099584Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.827029959s"
Sep 13 10:25:26.603187 containerd[1571]: time="2025-09-13T10:25:26.603144770Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 10:25:26.605334 containerd[1571]: time="2025-09-13T10:25:26.604811527Z" level=info msg="CreateContainer within sandbox \"099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 10:25:26.611034 containerd[1571]: time="2025-09-13T10:25:26.610997205Z" level=info msg="Container 22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:25:26.617176 containerd[1571]: time="2025-09-13T10:25:26.617146134Z" level=info msg="CreateContainer within sandbox \"099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\""
Sep 13 10:25:26.618191 containerd[1571]: time="2025-09-13T10:25:26.617578683Z" level=info msg="StartContainer for \"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\""
Sep 13 10:25:26.618345 containerd[1571]: time="2025-09-13T10:25:26.618312053Z" level=info msg="connecting to shim 22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c" address="unix:///run/containerd/s/6539a96a3d12f2b12f5e060deac3224fc84b4f6bac25234537698f25addf0cb8" protocol=ttrpc version=3
Sep 13 10:25:26.675699 systemd[1]: Started cri-containerd-22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c.scope - libcontainer container 22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c.
Sep 13 10:25:26.708161 containerd[1571]: time="2025-09-13T10:25:26.708115472Z" level=info msg="StartContainer for \"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\" returns successfully"
Sep 13 10:25:29.099340 kubelet[2735]: E0913 10:25:29.098912 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:29.290353 kubelet[2735]: E0913 10:25:29.290312 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:29.406111 kubelet[2735]: I0913 10:25:29.405893 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-cr2dn" podStartSLOduration=3.577707296 podStartE2EDuration="6.405873579s" podCreationTimestamp="2025-09-13 10:25:23 +0000 UTC" firstStartedPulling="2025-09-13 10:25:23.775623178 +0000 UTC m=+7.497488259" lastFinishedPulling="2025-09-13 10:25:26.603789471 +0000 UTC m=+10.325654542" observedRunningTime="2025-09-13 10:25:27.478161769 +0000 UTC m=+11.200026840" watchObservedRunningTime="2025-09-13 10:25:29.405873579 +0000 UTC m=+13.127738660"
Sep 13 10:25:29.416586 kubelet[2735]: E0913 10:25:29.416546 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:30.289682 systemd[1]: cri-containerd-22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c.scope: Deactivated successfully.
Sep 13 10:25:30.294555 containerd[1571]: time="2025-09-13T10:25:30.294466686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\" id:\"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\" pid:3074 exit_status:1 exited_at:{seconds:1757759130 nanos:293825474}"
Sep 13 10:25:30.295323 containerd[1571]: time="2025-09-13T10:25:30.295156119Z" level=info msg="received exit event container_id:\"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\" id:\"22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c\" pid:3074 exit_status:1 exited_at:{seconds:1757759130 nanos:293825474}"
Sep 13 10:25:30.352161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c-rootfs.mount: Deactivated successfully.
Sep 13 10:25:31.422182 kubelet[2735]: I0913 10:25:31.422143 2735 scope.go:117] "RemoveContainer" containerID="22f406a25f4d28e86a2440ab172309a557c340eeb76545c7757384d3c957bf7c"
Sep 13 10:25:31.424649 containerd[1571]: time="2025-09-13T10:25:31.424575634Z" level=info msg="CreateContainer within sandbox \"099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 10:25:31.456201 containerd[1571]: time="2025-09-13T10:25:31.455718541Z" level=info msg="Container b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:25:31.463992 containerd[1571]: time="2025-09-13T10:25:31.463942107Z" level=info msg="CreateContainer within sandbox \"099a97ff251e461a999df1c8911d25df620001d80c12f694edfc6f883fa573bb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621\""
Sep 13 10:25:31.464506 containerd[1571]: time="2025-09-13T10:25:31.464463742Z" level=info msg="StartContainer for \"b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621\""
Sep 13 10:25:31.465374 containerd[1571]: time="2025-09-13T10:25:31.465321703Z" level=info msg="connecting to shim b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621" address="unix:///run/containerd/s/6539a96a3d12f2b12f5e060deac3224fc84b4f6bac25234537698f25addf0cb8" protocol=ttrpc version=3
Sep 13 10:25:31.490689 systemd[1]: Started cri-containerd-b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621.scope - libcontainer container b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621.
Sep 13 10:25:31.524700 containerd[1571]: time="2025-09-13T10:25:31.524654034Z" level=info msg="StartContainer for \"b433fe76e93500c98be2d321c1a13e26ca03e66cad527637cf24f868872a8621\" returns successfully"
Sep 13 10:25:33.318388 sudo[1787]: pam_unix(sudo:session): session closed for user root
Sep 13 10:25:33.320465 sshd[1786]: Connection closed by 10.0.0.1 port 38562
Sep 13 10:25:33.320990 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Sep 13 10:25:33.324982 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:38562.service: Deactivated successfully.
Sep 13 10:25:33.328258 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 10:25:33.329033 systemd[1]: session-9.scope: Consumed 5.336s CPU time, 228.5M memory peak.
Sep 13 10:25:33.330791 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit.
Sep 13 10:25:33.332432 systemd-logind[1517]: Removed session 9.
Sep 13 10:25:37.274151 systemd[1]: Created slice kubepods-besteffort-pode1318152_4c74_47e8_8ae0_5ebba4fd02ad.slice - libcontainer container kubepods-besteffort-pode1318152_4c74_47e8_8ae0_5ebba4fd02ad.slice.
Sep 13 10:25:37.290102 kubelet[2735]: I0913 10:25:37.289928 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e1318152-4c74-47e8-8ae0-5ebba4fd02ad-typha-certs\") pod \"calico-typha-7bb4897d5-644vp\" (UID: \"e1318152-4c74-47e8-8ae0-5ebba4fd02ad\") " pod="calico-system/calico-typha-7bb4897d5-644vp"
Sep 13 10:25:37.290102 kubelet[2735]: I0913 10:25:37.289971 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1318152-4c74-47e8-8ae0-5ebba4fd02ad-tigera-ca-bundle\") pod \"calico-typha-7bb4897d5-644vp\" (UID: \"e1318152-4c74-47e8-8ae0-5ebba4fd02ad\") " pod="calico-system/calico-typha-7bb4897d5-644vp"
Sep 13 10:25:37.290102 kubelet[2735]: I0913 10:25:37.289993 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8f7w\" (UniqueName: \"kubernetes.io/projected/e1318152-4c74-47e8-8ae0-5ebba4fd02ad-kube-api-access-m8f7w\") pod \"calico-typha-7bb4897d5-644vp\" (UID: \"e1318152-4c74-47e8-8ae0-5ebba4fd02ad\") " pod="calico-system/calico-typha-7bb4897d5-644vp"
Sep 13 10:25:37.578415 kubelet[2735]: E0913 10:25:37.578247 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:37.579192 containerd[1571]: time="2025-09-13T10:25:37.579147630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb4897d5-644vp,Uid:e1318152-4c74-47e8-8ae0-5ebba4fd02ad,Namespace:calico-system,Attempt:0,}"
Sep 13 10:25:37.621015 systemd[1]: Created slice kubepods-besteffort-pod33718f28_3136_4e25_af5b_1d53105deafd.slice - libcontainer container kubepods-besteffort-pod33718f28_3136_4e25_af5b_1d53105deafd.slice.
Sep 13 10:25:37.624336 containerd[1571]: time="2025-09-13T10:25:37.624269279Z" level=info msg="connecting to shim 59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa" address="unix:///run/containerd/s/18fedc6099a29aaef2767917f019046ff34d2447c28072e65f806258c3426cb5" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:25:37.656676 systemd[1]: Started cri-containerd-59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa.scope - libcontainer container 59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa.
Sep 13 10:25:37.692387 kubelet[2735]: I0913 10:25:37.692241 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-cni-net-dir\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692387 kubelet[2735]: I0913 10:25:37.692325 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-cni-log-dir\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692387 kubelet[2735]: I0913 10:25:37.692344 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-var-lib-calico\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692933 kubelet[2735]: I0913 10:25:37.692361 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lf5\" (UniqueName: \"kubernetes.io/projected/33718f28-3136-4e25-af5b-1d53105deafd-kube-api-access-z7lf5\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692933 kubelet[2735]: I0913 10:25:37.692587 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-flexvol-driver-host\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692933 kubelet[2735]: I0913 10:25:37.692603 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-policysync\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692933 kubelet[2735]: I0913 10:25:37.692731 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-cni-bin-dir\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.692933 kubelet[2735]: I0913 10:25:37.692749 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33718f28-3136-4e25-af5b-1d53105deafd-node-certs\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.693261 kubelet[2735]: I0913 10:25:37.692762 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-var-run-calico\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.693261 kubelet[2735]: I0913 10:25:37.692976 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33718f28-3136-4e25-af5b-1d53105deafd-tigera-ca-bundle\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.693261 kubelet[2735]: I0913 10:25:37.692998 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-lib-modules\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.693261 kubelet[2735]: I0913 10:25:37.693013 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33718f28-3136-4e25-af5b-1d53105deafd-xtables-lock\") pod \"calico-node-98lqr\" (UID: \"33718f28-3136-4e25-af5b-1d53105deafd\") " pod="calico-system/calico-node-98lqr"
Sep 13 10:25:37.711321 containerd[1571]: time="2025-09-13T10:25:37.711271127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb4897d5-644vp,Uid:e1318152-4c74-47e8-8ae0-5ebba4fd02ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa\""
Sep 13 10:25:37.712173 kubelet[2735]: E0913 10:25:37.712144 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:37.713070 containerd[1571]: time="2025-09-13T10:25:37.713033188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 10:25:37.795768 kubelet[2735]: E0913 10:25:37.795678 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.795768 kubelet[2735]: W0913 10:25:37.795715 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.795768 kubelet[2735]: E0913 10:25:37.795753 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:25:37.798000 kubelet[2735]: E0913 10:25:37.797939 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.798000 kubelet[2735]: W0913 10:25:37.797965 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.798000 kubelet[2735]: E0913 10:25:37.797987 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:25:37.804884 kubelet[2735]: E0913 10:25:37.804840 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.804884 kubelet[2735]: W0913 10:25:37.804857 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.804884 kubelet[2735]: E0913 10:25:37.804872 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:25:37.895740 kubelet[2735]: E0913 10:25:37.895655 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f"
Sep 13 10:25:37.926308 containerd[1571]: time="2025-09-13T10:25:37.926247382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-98lqr,Uid:33718f28-3136-4e25-af5b-1d53105deafd,Namespace:calico-system,Attempt:0,}"
Sep 13 10:25:37.953566 containerd[1571]: time="2025-09-13T10:25:37.952673402Z" level=info msg="connecting to shim b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e" address="unix:///run/containerd/s/f36b4dcecda3b8945a17d4831e8fead1b9d7dfaa7651463ed58dc14ae9a90330" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:25:37.983804 systemd[1]: Started cri-containerd-b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e.scope - libcontainer container b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e.
Sep 13 10:25:37.993286 kubelet[2735]: E0913 10:25:37.993146 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.993286 kubelet[2735]: W0913 10:25:37.993171 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.993286 kubelet[2735]: E0913 10:25:37.993194 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:25:37.993493 kubelet[2735]: E0913 10:25:37.993473 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.993566 kubelet[2735]: W0913 10:25:37.993554 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.993628 kubelet[2735]: E0913 10:25:37.993617 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:25:37.993876 kubelet[2735]: E0913 10:25:37.993863 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:25:37.993988 kubelet[2735]: W0913 10:25:37.993932 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:25:37.993988 kubelet[2735]: E0913 10:25:37.993945 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 10:25:37.994297 kubelet[2735]: E0913 10:25:37.994281 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.994359 kubelet[2735]: W0913 10:25:37.994347 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.994420 kubelet[2735]: E0913 10:25:37.994407 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.994763 kubelet[2735]: E0913 10:25:37.994750 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.994879 kubelet[2735]: W0913 10:25:37.994827 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.994879 kubelet[2735]: E0913 10:25:37.994840 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:37.995114 kubelet[2735]: E0913 10:25:37.995102 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.995252 kubelet[2735]: W0913 10:25:37.995161 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.995252 kubelet[2735]: E0913 10:25:37.995173 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.995377 kubelet[2735]: E0913 10:25:37.995365 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.995426 kubelet[2735]: W0913 10:25:37.995416 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.995556 kubelet[2735]: E0913 10:25:37.995515 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:37.995873 kubelet[2735]: E0913 10:25:37.995859 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.996017 kubelet[2735]: W0913 10:25:37.995925 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.996017 kubelet[2735]: E0913 10:25:37.995938 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.996276 kubelet[2735]: E0913 10:25:37.996263 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.996421 kubelet[2735]: W0913 10:25:37.996337 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.996421 kubelet[2735]: E0913 10:25:37.996349 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:37.996924 kubelet[2735]: E0913 10:25:37.996901 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.997039 kubelet[2735]: W0913 10:25:37.996980 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.997039 kubelet[2735]: E0913 10:25:37.996994 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.997396 kubelet[2735]: E0913 10:25:37.997329 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.997396 kubelet[2735]: W0913 10:25:37.997340 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.997396 kubelet[2735]: E0913 10:25:37.997349 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:37.997737 kubelet[2735]: E0913 10:25:37.997697 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.997737 kubelet[2735]: W0913 10:25:37.997709 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.997737 kubelet[2735]: E0913 10:25:37.997718 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.998146 kubelet[2735]: E0913 10:25:37.998085 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.998146 kubelet[2735]: W0913 10:25:37.998097 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.998146 kubelet[2735]: E0913 10:25:37.998106 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:37.999517 kubelet[2735]: E0913 10:25:37.999401 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.999517 kubelet[2735]: W0913 10:25:37.999415 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.999517 kubelet[2735]: E0913 10:25:37.999425 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:37.999851 kubelet[2735]: E0913 10:25:37.999720 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:37.999851 kubelet[2735]: W0913 10:25:37.999736 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:37.999851 kubelet[2735]: E0913 10:25:37.999748 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.000134 kubelet[2735]: E0913 10:25:38.000004 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.000134 kubelet[2735]: W0913 10:25:38.000016 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.000134 kubelet[2735]: E0913 10:25:38.000026 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.000394 kubelet[2735]: E0913 10:25:38.000288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.000394 kubelet[2735]: W0913 10:25:38.000300 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.000394 kubelet[2735]: E0913 10:25:38.000310 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.000719 kubelet[2735]: E0913 10:25:38.000603 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.000719 kubelet[2735]: W0913 10:25:38.000616 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.000719 kubelet[2735]: E0913 10:25:38.000627 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.000972 kubelet[2735]: E0913 10:25:38.000865 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.000972 kubelet[2735]: W0913 10:25:38.000876 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.000972 kubelet[2735]: E0913 10:25:38.000884 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.001291 kubelet[2735]: E0913 10:25:38.001117 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.001291 kubelet[2735]: W0913 10:25:38.001128 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.001291 kubelet[2735]: E0913 10:25:38.001137 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.001614 kubelet[2735]: E0913 10:25:38.001429 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.001614 kubelet[2735]: W0913 10:25:38.001441 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.001614 kubelet[2735]: E0913 10:25:38.001450 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.001614 kubelet[2735]: I0913 10:25:38.001486 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36ae8918-1257-451c-80bc-5f671f0cac0f-varrun\") pod \"csi-node-driver-h2fq7\" (UID: \"36ae8918-1257-451c-80bc-5f671f0cac0f\") " pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:38.001967 kubelet[2735]: E0913 10:25:38.001784 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.001967 kubelet[2735]: W0913 10:25:38.001797 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.001967 kubelet[2735]: E0913 10:25:38.001819 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.001967 kubelet[2735]: I0913 10:25:38.001833 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36ae8918-1257-451c-80bc-5f671f0cac0f-registration-dir\") pod \"csi-node-driver-h2fq7\" (UID: \"36ae8918-1257-451c-80bc-5f671f0cac0f\") " pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:38.002310 kubelet[2735]: E0913 10:25:38.002139 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.002310 kubelet[2735]: W0913 10:25:38.002151 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.002310 kubelet[2735]: E0913 10:25:38.002173 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.002310 kubelet[2735]: I0913 10:25:38.002190 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36ae8918-1257-451c-80bc-5f671f0cac0f-kubelet-dir\") pod \"csi-node-driver-h2fq7\" (UID: \"36ae8918-1257-451c-80bc-5f671f0cac0f\") " pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:38.002618 kubelet[2735]: E0913 10:25:38.002472 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.002618 kubelet[2735]: W0913 10:25:38.002492 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.002618 kubelet[2735]: E0913 10:25:38.002511 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.002830 kubelet[2735]: I0913 10:25:38.002815 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnnd\" (UniqueName: \"kubernetes.io/projected/36ae8918-1257-451c-80bc-5f671f0cac0f-kube-api-access-lnnnd\") pod \"csi-node-driver-h2fq7\" (UID: \"36ae8918-1257-451c-80bc-5f671f0cac0f\") " pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:38.002960 kubelet[2735]: E0913 10:25:38.002947 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.003030 kubelet[2735]: W0913 10:25:38.003017 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.003095 kubelet[2735]: E0913 10:25:38.003083 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.003434 kubelet[2735]: E0913 10:25:38.003317 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.003434 kubelet[2735]: W0913 10:25:38.003341 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.003434 kubelet[2735]: E0913 10:25:38.003416 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.003618 kubelet[2735]: E0913 10:25:38.003592 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.003618 kubelet[2735]: W0913 10:25:38.003604 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.003830 kubelet[2735]: E0913 10:25:38.003814 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.004045 kubelet[2735]: E0913 10:25:38.004033 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.004120 kubelet[2735]: W0913 10:25:38.004108 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.004331 kubelet[2735]: E0913 10:25:38.004308 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.004483 kubelet[2735]: E0913 10:25:38.004446 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.004578 kubelet[2735]: W0913 10:25:38.004567 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.004721 kubelet[2735]: E0913 10:25:38.004697 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.004948 kubelet[2735]: E0913 10:25:38.004916 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.005020 kubelet[2735]: W0913 10:25:38.004993 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.005128 kubelet[2735]: E0913 10:25:38.005105 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.005269 kubelet[2735]: I0913 10:25:38.005246 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36ae8918-1257-451c-80bc-5f671f0cac0f-socket-dir\") pod \"csi-node-driver-h2fq7\" (UID: \"36ae8918-1257-451c-80bc-5f671f0cac0f\") " pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:38.005656 kubelet[2735]: E0913 10:25:38.005595 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.005656 kubelet[2735]: W0913 10:25:38.005607 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.005656 kubelet[2735]: E0913 10:25:38.005617 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.006202 kubelet[2735]: E0913 10:25:38.006177 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.006202 kubelet[2735]: W0913 10:25:38.006189 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.006427 kubelet[2735]: E0913 10:25:38.006411 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.007650 kubelet[2735]: E0913 10:25:38.007524 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.007650 kubelet[2735]: W0913 10:25:38.007555 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.007650 kubelet[2735]: E0913 10:25:38.007565 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.007811 kubelet[2735]: E0913 10:25:38.007799 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.007879 kubelet[2735]: W0913 10:25:38.007855 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.007879 kubelet[2735]: E0913 10:25:38.007867 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.008170 kubelet[2735]: E0913 10:25:38.008122 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.008170 kubelet[2735]: W0913 10:25:38.008133 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.008170 kubelet[2735]: E0913 10:25:38.008142 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.014974 containerd[1571]: time="2025-09-13T10:25:38.014932796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-98lqr,Uid:33718f28-3136-4e25-af5b-1d53105deafd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\"" Sep 13 10:25:38.106799 kubelet[2735]: E0913 10:25:38.106752 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.106799 kubelet[2735]: W0913 10:25:38.106774 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.106799 kubelet[2735]: E0913 10:25:38.106796 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.107072 kubelet[2735]: E0913 10:25:38.107054 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.107072 kubelet[2735]: W0913 10:25:38.107065 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.107136 kubelet[2735]: E0913 10:25:38.107079 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.107367 kubelet[2735]: E0913 10:25:38.107324 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.107367 kubelet[2735]: W0913 10:25:38.107356 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.107433 kubelet[2735]: E0913 10:25:38.107387 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.107586 kubelet[2735]: E0913 10:25:38.107568 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.107586 kubelet[2735]: W0913 10:25:38.107581 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.107671 kubelet[2735]: E0913 10:25:38.107593 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.107936 kubelet[2735]: E0913 10:25:38.107891 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.107936 kubelet[2735]: W0913 10:25:38.107922 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.108101 kubelet[2735]: E0913 10:25:38.107965 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.108225 kubelet[2735]: E0913 10:25:38.108206 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.108225 kubelet[2735]: W0913 10:25:38.108219 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.108271 kubelet[2735]: E0913 10:25:38.108234 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.108419 kubelet[2735]: E0913 10:25:38.108402 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.108419 kubelet[2735]: W0913 10:25:38.108413 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.108472 kubelet[2735]: E0913 10:25:38.108426 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.108613 kubelet[2735]: E0913 10:25:38.108597 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.108613 kubelet[2735]: W0913 10:25:38.108608 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.108679 kubelet[2735]: E0913 10:25:38.108621 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.108870 kubelet[2735]: E0913 10:25:38.108844 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.108870 kubelet[2735]: W0913 10:25:38.108856 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.108920 kubelet[2735]: E0913 10:25:38.108881 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.109037 kubelet[2735]: E0913 10:25:38.109021 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.109037 kubelet[2735]: W0913 10:25:38.109033 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.109083 kubelet[2735]: E0913 10:25:38.109053 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.109209 kubelet[2735]: E0913 10:25:38.109194 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.109209 kubelet[2735]: W0913 10:25:38.109204 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.109256 kubelet[2735]: E0913 10:25:38.109224 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.109396 kubelet[2735]: E0913 10:25:38.109381 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.109396 kubelet[2735]: W0913 10:25:38.109394 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.109450 kubelet[2735]: E0913 10:25:38.109428 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.109624 kubelet[2735]: E0913 10:25:38.109607 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.109624 kubelet[2735]: W0913 10:25:38.109619 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.109686 kubelet[2735]: E0913 10:25:38.109633 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.109855 kubelet[2735]: E0913 10:25:38.109835 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.109855 kubelet[2735]: W0913 10:25:38.109849 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.109906 kubelet[2735]: E0913 10:25:38.109864 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.110076 kubelet[2735]: E0913 10:25:38.110060 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.110076 kubelet[2735]: W0913 10:25:38.110071 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.110130 kubelet[2735]: E0913 10:25:38.110086 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.110261 kubelet[2735]: E0913 10:25:38.110245 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.110261 kubelet[2735]: W0913 10:25:38.110256 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.110308 kubelet[2735]: E0913 10:25:38.110269 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.110443 kubelet[2735]: E0913 10:25:38.110427 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.110443 kubelet[2735]: W0913 10:25:38.110438 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.110501 kubelet[2735]: E0913 10:25:38.110462 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.110651 kubelet[2735]: E0913 10:25:38.110635 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.110651 kubelet[2735]: W0913 10:25:38.110648 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.110728 kubelet[2735]: E0913 10:25:38.110709 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.110841 kubelet[2735]: E0913 10:25:38.110826 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.110841 kubelet[2735]: W0913 10:25:38.110837 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.110887 kubelet[2735]: E0913 10:25:38.110860 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.111038 kubelet[2735]: E0913 10:25:38.111022 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.111038 kubelet[2735]: W0913 10:25:38.111033 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.111081 kubelet[2735]: E0913 10:25:38.111048 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.111234 kubelet[2735]: E0913 10:25:38.111217 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.111234 kubelet[2735]: W0913 10:25:38.111228 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.111290 kubelet[2735]: E0913 10:25:38.111242 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.111409 kubelet[2735]: E0913 10:25:38.111392 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.111409 kubelet[2735]: W0913 10:25:38.111404 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.111452 kubelet[2735]: E0913 10:25:38.111415 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.111665 kubelet[2735]: E0913 10:25:38.111645 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.111665 kubelet[2735]: W0913 10:25:38.111660 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.111734 kubelet[2735]: E0913 10:25:38.111671 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.111911 kubelet[2735]: E0913 10:25:38.111888 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.111911 kubelet[2735]: W0913 10:25:38.111902 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.111911 kubelet[2735]: E0913 10:25:38.111914 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:38.112145 kubelet[2735]: E0913 10:25:38.112124 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.112145 kubelet[2735]: W0913 10:25:38.112137 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.112145 kubelet[2735]: E0913 10:25:38.112146 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:38.120780 kubelet[2735]: E0913 10:25:38.120743 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:38.120780 kubelet[2735]: W0913 10:25:38.120767 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:38.120847 kubelet[2735]: E0913 10:25:38.120792 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:39.188141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204077280.mount: Deactivated successfully. Sep 13 10:25:39.377479 kubelet[2735]: E0913 10:25:39.377404 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f" Sep 13 10:25:41.377050 kubelet[2735]: E0913 10:25:41.376969 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f" Sep 13 10:25:41.881116 containerd[1571]: time="2025-09-13T10:25:41.881054046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:41.881759 containerd[1571]: time="2025-09-13T10:25:41.881727334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, 
bytes read=35237389" Sep 13 10:25:41.882899 containerd[1571]: time="2025-09-13T10:25:41.882868342Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:41.884940 containerd[1571]: time="2025-09-13T10:25:41.884890489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:41.885808 containerd[1571]: time="2025-09-13T10:25:41.885736813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.172670893s" Sep 13 10:25:41.885808 containerd[1571]: time="2025-09-13T10:25:41.885794752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 10:25:41.889552 containerd[1571]: time="2025-09-13T10:25:41.889287588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 10:25:41.897431 containerd[1571]: time="2025-09-13T10:25:41.897383231Z" level=info msg="CreateContainer within sandbox \"59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 10:25:41.906252 containerd[1571]: time="2025-09-13T10:25:41.906228505Z" level=info msg="Container 72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:41.916491 containerd[1571]: time="2025-09-13T10:25:41.916243561Z" level=info msg="CreateContainer 
within sandbox \"59d0f817ba9258c8ee7842948d4626267e2853ac08753b74e27dd3ae61621eaa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d\"" Sep 13 10:25:41.916998 containerd[1571]: time="2025-09-13T10:25:41.916966823Z" level=info msg="StartContainer for \"72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d\"" Sep 13 10:25:41.918242 containerd[1571]: time="2025-09-13T10:25:41.918211867Z" level=info msg="connecting to shim 72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d" address="unix:///run/containerd/s/18fedc6099a29aaef2767917f019046ff34d2447c28072e65f806258c3426cb5" protocol=ttrpc version=3 Sep 13 10:25:41.948772 systemd[1]: Started cri-containerd-72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d.scope - libcontainer container 72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d. Sep 13 10:25:42.007365 containerd[1571]: time="2025-09-13T10:25:42.007311761Z" level=info msg="StartContainer for \"72990367e9001bbab6eac510f2d5498604a49d754202e2f10ebd2dbafe8f479d\" returns successfully" Sep 13 10:25:42.451280 kubelet[2735]: E0913 10:25:42.451235 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:42.461467 kubelet[2735]: I0913 10:25:42.461368 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bb4897d5-644vp" podStartSLOduration=1.284926081 podStartE2EDuration="5.461345412s" podCreationTimestamp="2025-09-13 10:25:37 +0000 UTC" firstStartedPulling="2025-09-13 10:25:37.712739374 +0000 UTC m=+21.434604455" lastFinishedPulling="2025-09-13 10:25:41.889158705 +0000 UTC m=+25.611023786" observedRunningTime="2025-09-13 10:25:42.460923198 +0000 UTC m=+26.182788289" watchObservedRunningTime="2025-09-13 10:25:42.461345412 +0000 UTC m=+26.183210494" 
Sep 13 10:25:42.530229 kubelet[2735]: E0913 10:25:42.530178 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.530229 kubelet[2735]: W0913 10:25:42.530205 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.530229 kubelet[2735]: E0913 10:25:42.530230 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.530468 kubelet[2735]: E0913 10:25:42.530451 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.530468 kubelet[2735]: W0913 10:25:42.530458 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.530468 kubelet[2735]: E0913 10:25:42.530466 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.530699 kubelet[2735]: E0913 10:25:42.530671 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.530699 kubelet[2735]: W0913 10:25:42.530680 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.530699 kubelet[2735]: E0913 10:25:42.530688 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.530904 kubelet[2735]: E0913 10:25:42.530891 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.530904 kubelet[2735]: W0913 10:25:42.530900 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.530958 kubelet[2735]: E0913 10:25:42.530907 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.531120 kubelet[2735]: E0913 10:25:42.531108 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.531120 kubelet[2735]: W0913 10:25:42.531117 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.531167 kubelet[2735]: E0913 10:25:42.531126 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.531305 kubelet[2735]: E0913 10:25:42.531288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.531305 kubelet[2735]: W0913 10:25:42.531297 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.531305 kubelet[2735]: E0913 10:25:42.531304 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.531495 kubelet[2735]: E0913 10:25:42.531481 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.531495 kubelet[2735]: W0913 10:25:42.531490 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.531588 kubelet[2735]: E0913 10:25:42.531499 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.531697 kubelet[2735]: E0913 10:25:42.531684 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.531697 kubelet[2735]: W0913 10:25:42.531694 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.531771 kubelet[2735]: E0913 10:25:42.531702 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.531978 kubelet[2735]: E0913 10:25:42.531940 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.532033 kubelet[2735]: W0913 10:25:42.531975 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.532033 kubelet[2735]: E0913 10:25:42.532009 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.532331 kubelet[2735]: E0913 10:25:42.532313 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.532331 kubelet[2735]: W0913 10:25:42.532327 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.532422 kubelet[2735]: E0913 10:25:42.532337 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.532571 kubelet[2735]: E0913 10:25:42.532556 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.532571 kubelet[2735]: W0913 10:25:42.532566 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.532640 kubelet[2735]: E0913 10:25:42.532574 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.532737 kubelet[2735]: E0913 10:25:42.532724 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.532737 kubelet[2735]: W0913 10:25:42.532734 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.532813 kubelet[2735]: E0913 10:25:42.532741 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.532920 kubelet[2735]: E0913 10:25:42.532907 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.532920 kubelet[2735]: W0913 10:25:42.532916 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.533000 kubelet[2735]: E0913 10:25:42.532924 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.533111 kubelet[2735]: E0913 10:25:42.533100 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.533111 kubelet[2735]: W0913 10:25:42.533111 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.533242 kubelet[2735]: E0913 10:25:42.533118 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.533297 kubelet[2735]: E0913 10:25:42.533283 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.533297 kubelet[2735]: W0913 10:25:42.533292 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.533366 kubelet[2735]: E0913 10:25:42.533299 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.536853 kubelet[2735]: E0913 10:25:42.536831 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.536853 kubelet[2735]: W0913 10:25:42.536848 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.536952 kubelet[2735]: E0913 10:25:42.536863 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.537187 kubelet[2735]: E0913 10:25:42.537170 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.537187 kubelet[2735]: W0913 10:25:42.537185 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.537260 kubelet[2735]: E0913 10:25:42.537216 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.537620 kubelet[2735]: E0913 10:25:42.537586 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.537620 kubelet[2735]: W0913 10:25:42.537616 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.537702 kubelet[2735]: E0913 10:25:42.537655 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.537906 kubelet[2735]: E0913 10:25:42.537888 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.537906 kubelet[2735]: W0913 10:25:42.537900 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.537973 kubelet[2735]: E0913 10:25:42.537933 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:42.538181 kubelet[2735]: E0913 10:25:42.538164 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.538181 kubelet[2735]: W0913 10:25:42.538176 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.538254 kubelet[2735]: E0913 10:25:42.538191 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:42.538444 kubelet[2735]: E0913 10:25:42.538415 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:42.538475 kubelet[2735]: W0913 10:25:42.538443 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:42.538475 kubelet[2735]: E0913 10:25:42.538459 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:43.377413 kubelet[2735]: E0913 10:25:43.377347 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f" Sep 13 10:25:43.452734 kubelet[2735]: I0913 10:25:43.452694 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:25:43.453218 kubelet[2735]: E0913 10:25:43.453036 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:25:43.503123 containerd[1571]: time="2025-09-13T10:25:43.503064739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:43.504024 containerd[1571]: time="2025-09-13T10:25:43.503990521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 10:25:43.505241 containerd[1571]: time="2025-09-13T10:25:43.505205337Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:43.507519 containerd[1571]: time="2025-09-13T10:25:43.507477954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:43.508189 containerd[1571]: time="2025-09-13T10:25:43.508134510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id 
\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.618817587s" Sep 13 10:25:43.508189 containerd[1571]: time="2025-09-13T10:25:43.508165648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 10:25:43.510575 containerd[1571]: time="2025-09-13T10:25:43.510167135Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 10:25:43.519117 containerd[1571]: time="2025-09-13T10:25:43.519061003Z" level=info msg="Container 5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:43.526814 containerd[1571]: time="2025-09-13T10:25:43.526773327Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\"" Sep 13 10:25:43.527284 containerd[1571]: time="2025-09-13T10:25:43.527250815Z" level=info msg="StartContainer for \"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\"" Sep 13 10:25:43.528847 containerd[1571]: time="2025-09-13T10:25:43.528817374Z" level=info msg="connecting to shim 5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef" address="unix:///run/containerd/s/f36b4dcecda3b8945a17d4831e8fead1b9d7dfaa7651463ed58dc14ae9a90330" protocol=ttrpc version=3 Sep 13 10:25:43.539912 kubelet[2735]: E0913 10:25:43.539871 2735 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Sep 13 10:25:43.539912 kubelet[2735]: W0913 10:25:43.539894 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:43.539912 kubelet[2735]: E0913 10:25:43.539919 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:43.540215 kubelet[2735]: E0913 10:25:43.540188 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:43.540215 kubelet[2735]: W0913 10:25:43.540202 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:43.540215 kubelet[2735]: E0913 10:25:43.540211 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:43.540477 kubelet[2735]: E0913 10:25:43.540457 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:43.540477 kubelet[2735]: W0913 10:25:43.540469 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:43.540583 kubelet[2735]: E0913 10:25:43.540507 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:25:43.551168 kubelet[2735]: E0913 10:25:43.551151 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:43.551168 kubelet[2735]: W0913 10:25:43.551161 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:43.551168 kubelet[2735]: E0913 10:25:43.551169 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:43.551454 kubelet[2735]: E0913 10:25:43.551436 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:25:43.551454 kubelet[2735]: W0913 10:25:43.551447 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:25:43.551454 kubelet[2735]: E0913 10:25:43.551455 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:25:43.554709 systemd[1]: Started cri-containerd-5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef.scope - libcontainer container 5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef. Sep 13 10:25:43.600458 containerd[1571]: time="2025-09-13T10:25:43.600311902Z" level=info msg="StartContainer for \"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\" returns successfully" Sep 13 10:25:43.614972 systemd[1]: cri-containerd-5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef.scope: Deactivated successfully. 
Sep 13 10:25:43.615327 systemd[1]: cri-containerd-5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef.scope: Consumed 43ms CPU time, 6.5M memory peak, 4.3M written to disk.
Sep 13 10:25:43.618341 containerd[1571]: time="2025-09-13T10:25:43.618305505Z" level=info msg="received exit event container_id:\"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\" id:\"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\" pid:3516 exited_at:{seconds:1757759143 nanos:617965926}"
Sep 13 10:25:43.618545 containerd[1571]: time="2025-09-13T10:25:43.618490293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\" id:\"5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef\" pid:3516 exited_at:{seconds:1757759143 nanos:617965926}"
Sep 13 10:25:43.639927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b906c430ae2d680cc797c44cfa6f74186a2fcb88ef7ff5e832548a4cf48ceef-rootfs.mount: Deactivated successfully.
Sep 13 10:25:44.456275 containerd[1571]: time="2025-09-13T10:25:44.456228758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 10:25:45.377234 kubelet[2735]: E0913 10:25:45.377176 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f"
Sep 13 10:25:47.378029 kubelet[2735]: E0913 10:25:47.377938 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f"
Sep 13 10:25:47.860921 containerd[1571]: time="2025-09-13T10:25:47.860875599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:47.862071 containerd[1571]: time="2025-09-13T10:25:47.862048394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 10:25:47.863215 containerd[1571]: time="2025-09-13T10:25:47.863191614Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:47.865262 containerd[1571]: time="2025-09-13T10:25:47.865204018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:25:47.865926 containerd[1571]: time="2025-09-13T10:25:47.865886191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.409619301s"
Sep 13 10:25:47.865968 containerd[1571]: time="2025-09-13T10:25:47.865924713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 10:25:47.869200 containerd[1571]: time="2025-09-13T10:25:47.869136413Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 10:25:47.881815 containerd[1571]: time="2025-09-13T10:25:47.881757498Z" level=info msg="Container 88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:25:47.891385 containerd[1571]: time="2025-09-13T10:25:47.891344236Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\""
Sep 13 10:25:47.891933 containerd[1571]: time="2025-09-13T10:25:47.891898488Z" level=info msg="StartContainer for \"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\""
Sep 13 10:25:47.893615 containerd[1571]: time="2025-09-13T10:25:47.893581042Z" level=info msg="connecting to shim 88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300" address="unix:///run/containerd/s/f36b4dcecda3b8945a17d4831e8fead1b9d7dfaa7651463ed58dc14ae9a90330" protocol=ttrpc version=3
Sep 13 10:25:47.920700 systemd[1]: Started cri-containerd-88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300.scope - libcontainer container 88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300.
Sep 13 10:25:47.966219 containerd[1571]: time="2025-09-13T10:25:47.966171560Z" level=info msg="StartContainer for \"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\" returns successfully"
Sep 13 10:25:49.365173 systemd[1]: cri-containerd-88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300.scope: Deactivated successfully.
Sep 13 10:25:49.365684 systemd[1]: cri-containerd-88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300.scope: Consumed 628ms CPU time, 178.4M memory peak, 3.7M read from disk, 171.3M written to disk.
Sep 13 10:25:49.366312 containerd[1571]: time="2025-09-13T10:25:49.366223417Z" level=info msg="received exit event container_id:\"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\" id:\"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\" pid:3575 exited_at:{seconds:1757759149 nanos:365908324}"
Sep 13 10:25:49.366701 containerd[1571]: time="2025-09-13T10:25:49.366275715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\" id:\"88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300\" pid:3575 exited_at:{seconds:1757759149 nanos:365908324}"
Sep 13 10:25:49.378555 kubelet[2735]: E0913 10:25:49.377478 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2fq7" podUID="36ae8918-1257-451c-80bc-5f671f0cac0f"
Sep 13 10:25:49.390201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e08b6c76ae70265eb45b9c2b2ab4d56fe9c647753b677dc444e1c725738300-rootfs.mount: Deactivated successfully.
Sep 13 10:25:49.428554 kubelet[2735]: I0913 10:25:49.428514 2735 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 10:25:49.525553 systemd[1]: Created slice kubepods-burstable-pod332cfa20_cbf3_471e_a252_91d5c9d68d05.slice - libcontainer container kubepods-burstable-pod332cfa20_cbf3_471e_a252_91d5c9d68d05.slice.
Sep 13 10:25:49.531496 systemd[1]: Created slice kubepods-burstable-podbc75f4fb_0a1c_49fb_a0d7_141e5083d312.slice - libcontainer container kubepods-burstable-podbc75f4fb_0a1c_49fb_a0d7_141e5083d312.slice.
Sep 13 10:25:49.536909 systemd[1]: Created slice kubepods-besteffort-poda9b1d0e6_d124_4ce1_b03f_2f2566e34a5e.slice - libcontainer container kubepods-besteffort-poda9b1d0e6_d124_4ce1_b03f_2f2566e34a5e.slice.
Sep 13 10:25:49.541638 systemd[1]: Created slice kubepods-besteffort-podff73d452_8bc0_4b29_883d_3a8f8df30df4.slice - libcontainer container kubepods-besteffort-podff73d452_8bc0_4b29_883d_3a8f8df30df4.slice.
Sep 13 10:25:49.548051 systemd[1]: Created slice kubepods-besteffort-pod0b2c76b8_96c3_4491_a4d0_e4bb679eea01.slice - libcontainer container kubepods-besteffort-pod0b2c76b8_96c3_4491_a4d0_e4bb679eea01.slice.
Sep 13 10:25:49.556288 systemd[1]: Created slice kubepods-besteffort-pod33be0de5_a55c_4d60_88e7_12b6ab30c6d9.slice - libcontainer container kubepods-besteffort-pod33be0de5_a55c_4d60_88e7_12b6ab30c6d9.slice.
Sep 13 10:25:49.563644 systemd[1]: Created slice kubepods-besteffort-pod1583222c_6d16_4797_9770_76267a81da49.slice - libcontainer container kubepods-besteffort-pod1583222c_6d16_4797_9770_76267a81da49.slice.
Sep 13 10:25:49.587560 kubelet[2735]: I0913 10:25:49.586892 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshc4\" (UniqueName: \"kubernetes.io/projected/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-kube-api-access-tshc4\") pod \"whisker-58f4887dc7-7ddbf\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " pod="calico-system/whisker-58f4887dc7-7ddbf"
Sep 13 10:25:49.587560 kubelet[2735]: I0913 10:25:49.586936 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff73d452-8bc0-4b29-883d-3a8f8df30df4-calico-apiserver-certs\") pod \"calico-apiserver-6f964d5944-f9ntd\" (UID: \"ff73d452-8bc0-4b29-883d-3a8f8df30df4\") " pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd"
Sep 13 10:25:49.587560 kubelet[2735]: I0913 10:25:49.586952 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvv7\" (UniqueName: \"kubernetes.io/projected/ff73d452-8bc0-4b29-883d-3a8f8df30df4-kube-api-access-nkvv7\") pod \"calico-apiserver-6f964d5944-f9ntd\" (UID: \"ff73d452-8bc0-4b29-883d-3a8f8df30df4\") " pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd"
Sep 13 10:25:49.587560 kubelet[2735]: I0913 10:25:49.586968 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc75f4fb-0a1c-49fb-a0d7-141e5083d312-config-volume\") pod \"coredns-668d6bf9bc-2pl89\" (UID: \"bc75f4fb-0a1c-49fb-a0d7-141e5083d312\") " pod="kube-system/coredns-668d6bf9bc-2pl89"
Sep 13 10:25:49.587560 kubelet[2735]: I0913 10:25:49.586983 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b2c76b8-96c3-4491-a4d0-e4bb679eea01-calico-apiserver-certs\") pod \"calico-apiserver-6f964d5944-bnt8t\" (UID: \"0b2c76b8-96c3-4491-a4d0-e4bb679eea01\") " pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t"
Sep 13 10:25:49.587856 kubelet[2735]: I0913 10:25:49.586998 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgr9\" (UniqueName: \"kubernetes.io/projected/bc75f4fb-0a1c-49fb-a0d7-141e5083d312-kube-api-access-ptgr9\") pod \"coredns-668d6bf9bc-2pl89\" (UID: \"bc75f4fb-0a1c-49fb-a0d7-141e5083d312\") " pod="kube-system/coredns-668d6bf9bc-2pl89"
Sep 13 10:25:49.587856 kubelet[2735]: I0913 10:25:49.587016 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t256s\" (UniqueName: \"kubernetes.io/projected/a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e-kube-api-access-t256s\") pod \"calico-kube-controllers-dd9b5bf75-8xjv4\" (UID: \"a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e\") " pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4"
Sep 13 10:25:49.587856 kubelet[2735]: I0913 10:25:49.587031 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1583222c-6d16-4797-9770-76267a81da49-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-9whjr\" (UID: \"1583222c-6d16-4797-9770-76267a81da49\") " pod="calico-system/goldmane-54d579b49d-9whjr"
Sep 13 10:25:49.587856 kubelet[2735]: I0913 10:25:49.587084 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332cfa20-cbf3-471e-a252-91d5c9d68d05-config-volume\") pod \"coredns-668d6bf9bc-wgdxp\" (UID: \"332cfa20-cbf3-471e-a252-91d5c9d68d05\") " pod="kube-system/coredns-668d6bf9bc-wgdxp"
Sep 13 10:25:49.587856 kubelet[2735]: I0913 10:25:49.587117 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-backend-key-pair\") pod \"whisker-58f4887dc7-7ddbf\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " pod="calico-system/whisker-58f4887dc7-7ddbf"
Sep 13 10:25:49.587973 kubelet[2735]: I0913 10:25:49.587139 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp66g\" (UniqueName: \"kubernetes.io/projected/332cfa20-cbf3-471e-a252-91d5c9d68d05-kube-api-access-cp66g\") pod \"coredns-668d6bf9bc-wgdxp\" (UID: \"332cfa20-cbf3-471e-a252-91d5c9d68d05\") " pod="kube-system/coredns-668d6bf9bc-wgdxp"
Sep 13 10:25:49.587973 kubelet[2735]: I0913 10:25:49.587159 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-ca-bundle\") pod \"whisker-58f4887dc7-7ddbf\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " pod="calico-system/whisker-58f4887dc7-7ddbf"
Sep 13 10:25:49.587973 kubelet[2735]: I0913 10:25:49.587183 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1583222c-6d16-4797-9770-76267a81da49-goldmane-key-pair\") pod \"goldmane-54d579b49d-9whjr\" (UID: \"1583222c-6d16-4797-9770-76267a81da49\") " pod="calico-system/goldmane-54d579b49d-9whjr"
Sep 13 10:25:49.587973 kubelet[2735]: I0913 10:25:49.587205 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2w7\" (UniqueName: \"kubernetes.io/projected/1583222c-6d16-4797-9770-76267a81da49-kube-api-access-9k2w7\") pod \"goldmane-54d579b49d-9whjr\" (UID: \"1583222c-6d16-4797-9770-76267a81da49\") " pod="calico-system/goldmane-54d579b49d-9whjr"
Sep 13 10:25:49.587973 kubelet[2735]: I0913 10:25:49.587225 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e-tigera-ca-bundle\") pod \"calico-kube-controllers-dd9b5bf75-8xjv4\" (UID: \"a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e\") " pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4"
Sep 13 10:25:49.588101 kubelet[2735]: I0913 10:25:49.587253 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1583222c-6d16-4797-9770-76267a81da49-config\") pod \"goldmane-54d579b49d-9whjr\" (UID: \"1583222c-6d16-4797-9770-76267a81da49\") " pod="calico-system/goldmane-54d579b49d-9whjr"
Sep 13 10:25:49.588101 kubelet[2735]: I0913 10:25:49.587275 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxwtk\" (UniqueName: \"kubernetes.io/projected/0b2c76b8-96c3-4491-a4d0-e4bb679eea01-kube-api-access-zxwtk\") pod \"calico-apiserver-6f964d5944-bnt8t\" (UID: \"0b2c76b8-96c3-4491-a4d0-e4bb679eea01\") " pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t"
Sep 13 10:25:49.829445 kubelet[2735]: E0913 10:25:49.829266 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:49.830167 containerd[1571]: time="2025-09-13T10:25:49.830000600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgdxp,Uid:332cfa20-cbf3-471e-a252-91d5c9d68d05,Namespace:kube-system,Attempt:0,}"
Sep 13 10:25:49.834755 kubelet[2735]: E0913 10:25:49.834724 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:25:49.836731 containerd[1571]: time="2025-09-13T10:25:49.836695607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pl89,Uid:bc75f4fb-0a1c-49fb-a0d7-141e5083d312,Namespace:kube-system,Attempt:0,}"
Sep 13 10:25:49.840353 containerd[1571]: time="2025-09-13T10:25:49.840304150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd9b5bf75-8xjv4,Uid:a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e,Namespace:calico-system,Attempt:0,}"
Sep 13 10:25:49.845914 containerd[1571]: time="2025-09-13T10:25:49.845849996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-f9ntd,Uid:ff73d452-8bc0-4b29-883d-3a8f8df30df4,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 10:25:49.853788 containerd[1571]: time="2025-09-13T10:25:49.853744498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-bnt8t,Uid:0b2c76b8-96c3-4491-a4d0-e4bb679eea01,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 10:25:49.861767 containerd[1571]: time="2025-09-13T10:25:49.861721654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f4887dc7-7ddbf,Uid:33be0de5-a55c-4d60-88e7-12b6ab30c6d9,Namespace:calico-system,Attempt:0,}"
Sep 13 10:25:49.866571 containerd[1571]: time="2025-09-13T10:25:49.866481823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9whjr,Uid:1583222c-6d16-4797-9770-76267a81da49,Namespace:calico-system,Attempt:0,}"
Sep 13 10:25:49.959723 containerd[1571]: time="2025-09-13T10:25:49.959648601Z" level=error msg="Failed to destroy network for sandbox \"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.969185 containerd[1571]: time="2025-09-13T10:25:49.969050155Z" level=error msg="Failed to destroy network for sandbox \"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.969421 containerd[1571]: time="2025-09-13T10:25:49.969395925Z" level=error msg="Failed to destroy network for sandbox \"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.969640 containerd[1571]: time="2025-09-13T10:25:49.969620136Z" level=error msg="Failed to destroy network for sandbox \"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.978646 containerd[1571]: time="2025-09-13T10:25:49.978590440Z" level=error msg="Failed to destroy network for sandbox \"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996182 containerd[1571]: time="2025-09-13T10:25:49.987247385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-f9ntd,Uid:ff73d452-8bc0-4b29-883d-3a8f8df30df4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996317 containerd[1571]: time="2025-09-13T10:25:49.987251272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgdxp,Uid:332cfa20-cbf3-471e-a252-91d5c9d68d05,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996360 containerd[1571]: time="2025-09-13T10:25:49.987264176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd9b5bf75-8xjv4,Uid:a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996360 containerd[1571]: time="2025-09-13T10:25:49.987271370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f4887dc7-7ddbf,Uid:33be0de5-a55c-4d60-88e7-12b6ab30c6d9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996432 containerd[1571]: time="2025-09-13T10:25:49.987277471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9whjr,Uid:1583222c-6d16-4797-9770-76267a81da49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996432 containerd[1571]: time="2025-09-13T10:25:49.987322896Z" level=error msg="Failed to destroy network for sandbox \"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.996499 containerd[1571]: time="2025-09-13T10:25:49.987324610Z" level=error msg="Failed to destroy network for sandbox \"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.997669 containerd[1571]: time="2025-09-13T10:25:49.997606269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pl89,Uid:bc75f4fb-0a1c-49fb-a0d7-141e5083d312,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:49.998591 containerd[1571]: time="2025-09-13T10:25:49.998553640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-bnt8t,Uid:0b2c76b8-96c3-4491-a4d0-e4bb679eea01,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.001854 kubelet[2735]: E0913 10:25:50.001695 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.001854 kubelet[2735]: E0913 10:25:50.001695 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.001854 kubelet[2735]: E0913 10:25:50.001695 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.001854 kubelet[2735]: E0913 10:25:50.001758 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.001854 kubelet[2735]: E0913 10:25:50.001697 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.002066 kubelet[2735]: E0913 10:25:50.001779 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 10:25:50.002066 kubelet[2735]: E0913 10:25:50.001797 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58f4887dc7-7ddbf"
Sep 13 10:25:50.002066 kubelet[2735]: E0913 10:25:50.001809 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2pl89"
Sep 13 10:25:50.002066 kubelet[2735]: E0913 10:25:50.001820 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4"
Sep 13 10:25:50.002161 kubelet[2735]: E0913 10:25:50.001833 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2pl89"
Sep 13 10:25:50.002161 kubelet[2735]: E0913 10:25:50.001842 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4"
Sep 13 10:25:50.002161 kubelet[2735]: E0913 10:25:50.001819 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58f4887dc7-7ddbf"
Sep 13 10:25:50.002161 kubelet[2735]: E0913 10:25:50.001796 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd"
Sep 13 10:25:50.002267 kubelet[2735]: E0913 10:25:50.001883 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd"
Sep 13 10:25:50.002267 kubelet[2735]: E0913 10:25:50.001815 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t"
Sep 13 10:25:50.002267 kubelet[2735]: E0913 10:25:50.001911 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t"
Sep 13 10:25:50.002339 kubelet[2735]: E0913 10:25:50.001935 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f964d5944-bnt8t_calico-apiserver(0b2c76b8-96c3-4491-a4d0-e4bb679eea01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f964d5944-bnt8t_calico-apiserver(0b2c76b8-96c3-4491-a4d0-e4bb679eea01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84bf5e471a3208bb18ead73d1bc051e4a8bbfa8b34f6a92dd2c074813eb1fe0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t" podUID="0b2c76b8-96c3-4491-a4d0-e4bb679eea01"
Sep 13 10:25:50.002339 kubelet[2735]: E0913 10:25:50.001934 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dd9b5bf75-8xjv4_calico-system(a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dd9b5bf75-8xjv4_calico-system(a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7140b1e951ad7700a661e3f7f1025d11970e7796287606fb5fb647ea1ac34843\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4" podUID="a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e"
Sep 13 10:25:50.002427 kubelet[2735]: E0913 10:25:50.001824 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wgdxp" Sep 13 10:25:50.002427 kubelet[2735]: E0913 10:25:50.001972 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wgdxp" Sep 13 10:25:50.002427 kubelet[2735]: E0913 10:25:50.001949 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f964d5944-f9ntd_calico-apiserver(ff73d452-8bc0-4b29-883d-3a8f8df30df4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f964d5944-f9ntd_calico-apiserver(ff73d452-8bc0-4b29-883d-3a8f8df30df4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f48750dcf289837ca2e429f338d73ee2a56ab5293c8534e9a156ff6d9a3aa1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd" podUID="ff73d452-8bc0-4b29-883d-3a8f8df30df4" Sep 13 10:25:50.002512 kubelet[2735]: E0913 10:25:50.001992 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wgdxp_kube-system(332cfa20-cbf3-471e-a252-91d5c9d68d05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wgdxp_kube-system(332cfa20-cbf3-471e-a252-91d5c9d68d05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01c931e495a9b87ded9fc88aacfb5f1e88065f7498ce7a65a36eba8fc239568b\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wgdxp" podUID="332cfa20-cbf3-471e-a252-91d5c9d68d05" Sep 13 10:25:50.002512 kubelet[2735]: E0913 10:25:50.002007 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2pl89_kube-system(bc75f4fb-0a1c-49fb-a0d7-141e5083d312)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2pl89_kube-system(bc75f4fb-0a1c-49fb-a0d7-141e5083d312)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa659bc399586561ca05625fc8e4f1c8ad0979ce72d851aa809822fcff2e0538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2pl89" podUID="bc75f4fb-0a1c-49fb-a0d7-141e5083d312" Sep 13 10:25:50.002512 kubelet[2735]: E0913 10:25:50.001762 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:25:50.002630 kubelet[2735]: E0913 10:25:50.002084 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9whjr" Sep 13 10:25:50.002630 
kubelet[2735]: E0913 10:25:50.002121 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9whjr" Sep 13 10:25:50.002630 kubelet[2735]: E0913 10:25:50.002142 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-9whjr_calico-system(1583222c-6d16-4797-9770-76267a81da49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-9whjr_calico-system(1583222c-6d16-4797-9770-76267a81da49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b74b4066de5182908c98b79c128a7d28d2d2f983dcdf9379517ed2ca65b99ce4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-9whjr" podUID="1583222c-6d16-4797-9770-76267a81da49" Sep 13 10:25:50.002718 kubelet[2735]: E0913 10:25:50.002272 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58f4887dc7-7ddbf_calico-system(33be0de5-a55c-4d60-88e7-12b6ab30c6d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58f4887dc7-7ddbf_calico-system(33be0de5-a55c-4d60-88e7-12b6ab30c6d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcc4670f03112c554bce30ac5b05520630b304dfd02f9478e5c4522eafb5cd85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-58f4887dc7-7ddbf" podUID="33be0de5-a55c-4d60-88e7-12b6ab30c6d9" Sep 13 10:25:50.390595 systemd[1]: run-netns-cni\x2d70bed55d\x2da406\x2df80d\x2d42be\x2dd92ee36fb645.mount: Deactivated successfully. Sep 13 10:25:50.473336 containerd[1571]: time="2025-09-13T10:25:50.473278169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 10:25:51.383217 systemd[1]: Created slice kubepods-besteffort-pod36ae8918_1257_451c_80bc_5f671f0cac0f.slice - libcontainer container kubepods-besteffort-pod36ae8918_1257_451c_80bc_5f671f0cac0f.slice. Sep 13 10:25:51.385914 containerd[1571]: time="2025-09-13T10:25:51.385875952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2fq7,Uid:36ae8918-1257-451c-80bc-5f671f0cac0f,Namespace:calico-system,Attempt:0,}" Sep 13 10:25:51.440671 containerd[1571]: time="2025-09-13T10:25:51.440618015Z" level=error msg="Failed to destroy network for sandbox \"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:25:51.442365 containerd[1571]: time="2025-09-13T10:25:51.442301249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2fq7,Uid:36ae8918-1257-451c-80bc-5f671f0cac0f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:25:51.442600 kubelet[2735]: E0913 10:25:51.442562 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:25:51.442977 kubelet[2735]: E0913 10:25:51.442631 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:51.442977 kubelet[2735]: E0913 10:25:51.442653 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2fq7" Sep 13 10:25:51.442977 kubelet[2735]: E0913 10:25:51.442696 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h2fq7_calico-system(36ae8918-1257-451c-80bc-5f671f0cac0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h2fq7_calico-system(36ae8918-1257-451c-80bc-5f671f0cac0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0968afc2a585a502c907ed1e2743e9f06d4032cbf82284280ddab290ab9b9f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h2fq7" 
podUID="36ae8918-1257-451c-80bc-5f671f0cac0f" Sep 13 10:25:51.443913 systemd[1]: run-netns-cni\x2d2f0ca4ca\x2d4546\x2df689\x2d3783\x2d548fb6e331a9.mount: Deactivated successfully. Sep 13 10:25:56.332259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551361847.mount: Deactivated successfully. Sep 13 10:25:58.308644 containerd[1571]: time="2025-09-13T10:25:58.308586510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:58.330393 containerd[1571]: time="2025-09-13T10:25:58.309769322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 10:25:58.330454 containerd[1571]: time="2025-09-13T10:25:58.311988078Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:58.330587 containerd[1571]: time="2025-09-13T10:25:58.315587379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.842267371s" Sep 13 10:25:58.330623 containerd[1571]: time="2025-09-13T10:25:58.330592400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 10:25:58.330983 containerd[1571]: time="2025-09-13T10:25:58.330958647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:25:58.339273 containerd[1571]: 
time="2025-09-13T10:25:58.339223970Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 10:25:58.381962 containerd[1571]: time="2025-09-13T10:25:58.380119385Z" level=info msg="Container 740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:25:58.391131 containerd[1571]: time="2025-09-13T10:25:58.391103504Z" level=info msg="CreateContainer within sandbox \"b980ea993ebb213918ca8f44a417681bd3c7a2f663e6e3579fbe84178b968a0e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\"" Sep 13 10:25:58.391665 containerd[1571]: time="2025-09-13T10:25:58.391621196Z" level=info msg="StartContainer for \"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\"" Sep 13 10:25:58.393499 containerd[1571]: time="2025-09-13T10:25:58.393461382Z" level=info msg="connecting to shim 740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0" address="unix:///run/containerd/s/f36b4dcecda3b8945a17d4831e8fead1b9d7dfaa7651463ed58dc14ae9a90330" protocol=ttrpc version=3 Sep 13 10:25:58.426736 systemd[1]: Started cri-containerd-740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0.scope - libcontainer container 740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0. Sep 13 10:25:58.492390 containerd[1571]: time="2025-09-13T10:25:58.492330556Z" level=info msg="StartContainer for \"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\" returns successfully" Sep 13 10:25:58.558805 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 10:25:58.559617 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 10:25:58.745780 kubelet[2735]: I0913 10:25:58.745701 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-ca-bundle\") pod \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " Sep 13 10:25:58.745780 kubelet[2735]: I0913 10:25:58.745771 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshc4\" (UniqueName: \"kubernetes.io/projected/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-kube-api-access-tshc4\") pod \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " Sep 13 10:25:58.746439 kubelet[2735]: I0913 10:25:58.746398 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "33be0de5-a55c-4d60-88e7-12b6ab30c6d9" (UID: "33be0de5-a55c-4d60-88e7-12b6ab30c6d9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:25:58.746554 kubelet[2735]: I0913 10:25:58.746494 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-backend-key-pair\") pod \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\" (UID: \"33be0de5-a55c-4d60-88e7-12b6ab30c6d9\") " Sep 13 10:25:58.746636 kubelet[2735]: I0913 10:25:58.746614 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 10:25:58.752413 kubelet[2735]: I0913 10:25:58.752363 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-kube-api-access-tshc4" (OuterVolumeSpecName: "kube-api-access-tshc4") pod "33be0de5-a55c-4d60-88e7-12b6ab30c6d9" (UID: "33be0de5-a55c-4d60-88e7-12b6ab30c6d9"). InnerVolumeSpecName "kube-api-access-tshc4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:25:58.753327 kubelet[2735]: I0913 10:25:58.753270 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "33be0de5-a55c-4d60-88e7-12b6ab30c6d9" (UID: "33be0de5-a55c-4d60-88e7-12b6ab30c6d9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 10:25:58.847770 kubelet[2735]: I0913 10:25:58.847714 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tshc4\" (UniqueName: \"kubernetes.io/projected/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-kube-api-access-tshc4\") on node \"localhost\" DevicePath \"\"" Sep 13 10:25:58.847770 kubelet[2735]: I0913 10:25:58.847748 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33be0de5-a55c-4d60-88e7-12b6ab30c6d9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 10:25:59.307918 systemd[1]: Removed slice kubepods-besteffort-pod33be0de5_a55c_4d60_88e7_12b6ab30c6d9.slice - libcontainer container kubepods-besteffort-pod33be0de5_a55c_4d60_88e7_12b6ab30c6d9.slice. Sep 13 10:25:59.333104 kubelet[2735]: I0913 10:25:59.331985 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-98lqr" podStartSLOduration=2.016528268 podStartE2EDuration="22.331962862s" podCreationTimestamp="2025-09-13 10:25:37 +0000 UTC" firstStartedPulling="2025-09-13 10:25:38.016123399 +0000 UTC m=+21.737988480" lastFinishedPulling="2025-09-13 10:25:58.331557993 +0000 UTC m=+42.053423074" observedRunningTime="2025-09-13 10:25:59.319933953 +0000 UTC m=+43.041799034" watchObservedRunningTime="2025-09-13 10:25:59.331962862 +0000 UTC m=+43.053827943" Sep 13 10:25:59.338994 systemd[1]: var-lib-kubelet-pods-33be0de5\x2da55c\x2d4d60\x2d88e7\x2d12b6ab30c6d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtshc4.mount: Deactivated successfully. Sep 13 10:25:59.339158 systemd[1]: var-lib-kubelet-pods-33be0de5\x2da55c\x2d4d60\x2d88e7\x2d12b6ab30c6d9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 10:25:59.382847 systemd[1]: Created slice kubepods-besteffort-pod173493e4_882a_4c7a_a30a_c4d6a06ae42f.slice - libcontainer container kubepods-besteffort-pod173493e4_882a_4c7a_a30a_c4d6a06ae42f.slice. Sep 13 10:25:59.451873 kubelet[2735]: I0913 10:25:59.451805 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84xr\" (UniqueName: \"kubernetes.io/projected/173493e4-882a-4c7a-a30a-c4d6a06ae42f-kube-api-access-h84xr\") pod \"whisker-6494d87476-7j9dc\" (UID: \"173493e4-882a-4c7a-a30a-c4d6a06ae42f\") " pod="calico-system/whisker-6494d87476-7j9dc" Sep 13 10:25:59.452065 kubelet[2735]: I0913 10:25:59.451888 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/173493e4-882a-4c7a-a30a-c4d6a06ae42f-whisker-backend-key-pair\") pod \"whisker-6494d87476-7j9dc\" (UID: \"173493e4-882a-4c7a-a30a-c4d6a06ae42f\") " pod="calico-system/whisker-6494d87476-7j9dc" Sep 13 10:25:59.452065 kubelet[2735]: I0913 10:25:59.451913 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/173493e4-882a-4c7a-a30a-c4d6a06ae42f-whisker-ca-bundle\") pod \"whisker-6494d87476-7j9dc\" (UID: \"173493e4-882a-4c7a-a30a-c4d6a06ae42f\") " pod="calico-system/whisker-6494d87476-7j9dc" Sep 13 10:25:59.499820 containerd[1571]: time="2025-09-13T10:25:59.499773940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\" id:\"4afc79b86da722514219d6ff796b1f60dc04691f7cef3790b09a0b7e10f6f688\" pid:3959 exit_status:1 exited_at:{seconds:1757759159 nanos:499398304}" Sep 13 10:25:59.688711 containerd[1571]: time="2025-09-13T10:25:59.688658683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6494d87476-7j9dc,Uid:173493e4-882a-4c7a-a30a-c4d6a06ae42f,Namespace:calico-system,Attempt:0,}" Sep 13 10:25:59.888573 systemd-networkd[1472]: calif4709ead394: Link UP Sep 13 10:25:59.889462 systemd-networkd[1472]: calif4709ead394: Gained carrier Sep 13 10:26:00.092020 containerd[1571]: 2025-09-13 10:25:59.717 [INFO][3974] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:00.092020 containerd[1571]: 2025-09-13 10:25:59.737 [INFO][3974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6494d87476--7j9dc-eth0 whisker-6494d87476- calico-system 173493e4-882a-4c7a-a30a-c4d6a06ae42f 903 0 2025-09-13 10:25:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6494d87476 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6494d87476-7j9dc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif4709ead394 [] [] }} ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-" Sep 13 10:26:00.092020 containerd[1571]: 2025-09-13 10:25:59.737 [INFO][3974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092020 containerd[1571]: 2025-09-13 10:25:59.809 [INFO][3989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" HandleID="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" 
Workload="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.811 [INFO][3989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" HandleID="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Workload="localhost-k8s-whisker--6494d87476--7j9dc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6494d87476-7j9dc", "timestamp":"2025-09-13 10:25:59.809615411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.811 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.811 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.812 [INFO][3989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.824 [INFO][3989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" host="localhost" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.833 [INFO][3989] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.841 [INFO][3989] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.844 [INFO][3989] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.849 [INFO][3989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:00.092342 containerd[1571]: 2025-09-13 10:25:59.849 [INFO][3989] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" host="localhost" Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.852 [INFO][3989] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6 Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.859 [INFO][3989] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" host="localhost" Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.869 [INFO][3989] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" host="localhost" Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.869 [INFO][3989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" host="localhost" Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.869 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:00.092608 containerd[1571]: 2025-09-13 10:25:59.869 [INFO][3989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" HandleID="k8s-pod-network.c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Workload="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092742 containerd[1571]: 2025-09-13 10:25:59.875 [INFO][3974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6494d87476--7j9dc-eth0", GenerateName:"whisker-6494d87476-", Namespace:"calico-system", SelfLink:"", UID:"173493e4-882a-4c7a-a30a-c4d6a06ae42f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6494d87476", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6494d87476-7j9dc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif4709ead394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:00.092742 containerd[1571]: 2025-09-13 10:25:59.875 [INFO][3974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092831 containerd[1571]: 2025-09-13 10:25:59.875 [INFO][3974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4709ead394 ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092831 containerd[1571]: 2025-09-13 10:25:59.889 [INFO][3974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.092877 containerd[1571]: 2025-09-13 10:25:59.894 [INFO][3974] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" 
WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6494d87476--7j9dc-eth0", GenerateName:"whisker-6494d87476-", Namespace:"calico-system", SelfLink:"", UID:"173493e4-882a-4c7a-a30a-c4d6a06ae42f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6494d87476", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6", Pod:"whisker-6494d87476-7j9dc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif4709ead394", MAC:"52:31:f9:c9:5d:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:00.092926 containerd[1571]: 2025-09-13 10:26:00.086 [INFO][3974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" Namespace="calico-system" Pod="whisker-6494d87476-7j9dc" WorkloadEndpoint="localhost-k8s-whisker--6494d87476--7j9dc-eth0" Sep 13 10:26:00.380734 kubelet[2735]: I0913 10:26:00.380598 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="33be0de5-a55c-4d60-88e7-12b6ab30c6d9" path="/var/lib/kubelet/pods/33be0de5-a55c-4d60-88e7-12b6ab30c6d9/volumes" Sep 13 10:26:01.071222 containerd[1571]: time="2025-09-13T10:26:01.071165767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\" id:\"fa6cce3495b09579a0b0b9d5ef3ff361f777897c43de822c946d17bed605da1f\" pid:4117 exit_status:1 exited_at:{seconds:1757759161 nanos:70803878}" Sep 13 10:26:01.192759 systemd-networkd[1472]: calif4709ead394: Gained IPv6LL Sep 13 10:26:01.243834 containerd[1571]: time="2025-09-13T10:26:01.243783494Z" level=info msg="connecting to shim c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6" address="unix:///run/containerd/s/a25563808964b299ad5214d68530191105cdd5b1c9f39213f7c205e826bfde3a" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:01.274776 systemd[1]: Started cri-containerd-c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6.scope - libcontainer container c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6. 
Sep 13 10:26:01.300074 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:01.355614 containerd[1571]: time="2025-09-13T10:26:01.353910816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6494d87476-7j9dc,Uid:173493e4-882a-4c7a-a30a-c4d6a06ae42f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6\"" Sep 13 10:26:01.363821 containerd[1571]: time="2025-09-13T10:26:01.363765881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 10:26:01.378773 containerd[1571]: time="2025-09-13T10:26:01.378714961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-bnt8t,Uid:0b2c76b8-96c3-4491-a4d0-e4bb679eea01,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:26:01.379059 containerd[1571]: time="2025-09-13T10:26:01.379036324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd9b5bf75-8xjv4,Uid:a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:01.379412 containerd[1571]: time="2025-09-13T10:26:01.379352718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9whjr,Uid:1583222c-6d16-4797-9770-76267a81da49,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:01.507076 systemd-networkd[1472]: cali8233a9cbeed: Link UP Sep 13 10:26:01.507619 systemd-networkd[1472]: cali8233a9cbeed: Gained carrier Sep 13 10:26:01.522693 containerd[1571]: 2025-09-13 10:26:01.426 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:01.522693 containerd[1571]: 2025-09-13 10:26:01.439 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--9whjr-eth0 goldmane-54d579b49d- calico-system 1583222c-6d16-4797-9770-76267a81da49 834 0 2025-09-13 10:25:37 
+0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-9whjr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8233a9cbeed [] [] }} ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-" Sep 13 10:26:01.522693 containerd[1571]: 2025-09-13 10:26:01.440 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.522693 containerd[1571]: 2025-09-13 10:26:01.469 [INFO][4238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" HandleID="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Workload="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.469 [INFO][4238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" HandleID="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Workload="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-9whjr", "timestamp":"2025-09-13 10:26:01.469621963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.469 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.470 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.470 [INFO][4238] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.479 [INFO][4238] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" host="localhost" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.483 [INFO][4238] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.486 [INFO][4238] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.488 [INFO][4238] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.490 [INFO][4238] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:01.522924 containerd[1571]: 2025-09-13 10:26:01.490 [INFO][4238] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" host="localhost" Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.491 [INFO][4238] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79 Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.496 [INFO][4238] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" host="localhost" Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.500 [INFO][4238] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" host="localhost" Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.500 [INFO][4238] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" host="localhost" Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.500 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:01.523220 containerd[1571]: 2025-09-13 10:26:01.500 [INFO][4238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" HandleID="k8s-pod-network.81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Workload="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.523392 containerd[1571]: 2025-09-13 10:26:01.504 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--9whjr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1583222c-6d16-4797-9770-76267a81da49", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 37, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-9whjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8233a9cbeed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:01.523392 containerd[1571]: 2025-09-13 10:26:01.504 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.523524 containerd[1571]: 2025-09-13 10:26:01.505 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8233a9cbeed ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.523524 containerd[1571]: 2025-09-13 10:26:01.507 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.523613 containerd[1571]: 2025-09-13 10:26:01.508 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--9whjr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1583222c-6d16-4797-9770-76267a81da49", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79", Pod:"goldmane-54d579b49d-9whjr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8233a9cbeed", MAC:"56:f7:29:f6:9e:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:01.523687 containerd[1571]: 2025-09-13 10:26:01.517 [INFO][4205] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" Namespace="calico-system" Pod="goldmane-54d579b49d-9whjr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--9whjr-eth0" Sep 13 10:26:01.563332 containerd[1571]: time="2025-09-13T10:26:01.563274070Z" level=info msg="connecting to shim 81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79" address="unix:///run/containerd/s/dff1c2b7735b964df74da33774302f57e91be269ccce3e5102f0f87aeb87c7ab" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:01.584702 systemd[1]: Started cri-containerd-81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79.scope - libcontainer container 81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79. Sep 13 10:26:01.599589 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:01.615990 systemd-networkd[1472]: calie1997f85229: Link UP Sep 13 10:26:01.617386 systemd-networkd[1472]: calie1997f85229: Gained carrier Sep 13 10:26:01.635364 containerd[1571]: 2025-09-13 10:26:01.429 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:01.635364 containerd[1571]: 2025-09-13 10:26:01.442 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0 calico-apiserver-6f964d5944- calico-apiserver 0b2c76b8-96c3-4491-a4d0-e4bb679eea01 833 0 2025-09-13 10:25:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f964d5944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f964d5944-bnt8t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calie1997f85229 [] [] }} ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-" Sep 13 10:26:01.635364 containerd[1571]: 2025-09-13 10:26:01.442 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.635364 containerd[1571]: 2025-09-13 10:26:01.472 [INFO][4241] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" HandleID="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Workload="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.472 [INFO][4241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" HandleID="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Workload="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034dca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f964d5944-bnt8t", "timestamp":"2025-09-13 10:26:01.472556493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.472 [INFO][4241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.500 [INFO][4241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.501 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.584 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" host="localhost" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.589 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.594 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.595 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.597 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:01.635613 containerd[1571]: 2025-09-13 10:26:01.597 [INFO][4241] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" host="localhost" Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.599 [INFO][4241] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260 Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.602 [INFO][4241] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" host="localhost" Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4241] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" host="localhost" Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" host="localhost" Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:01.635929 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" HandleID="k8s-pod-network.6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Workload="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.636072 containerd[1571]: 2025-09-13 10:26:01.613 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0", GenerateName:"calico-apiserver-6f964d5944-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b2c76b8-96c3-4491-a4d0-e4bb679eea01", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6f964d5944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f964d5944-bnt8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1997f85229", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:01.636124 containerd[1571]: 2025-09-13 10:26:01.614 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.636124 containerd[1571]: 2025-09-13 10:26:01.614 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1997f85229 ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.636124 containerd[1571]: 2025-09-13 10:26:01.616 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.636189 
containerd[1571]: 2025-09-13 10:26:01.616 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0", GenerateName:"calico-apiserver-6f964d5944-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b2c76b8-96c3-4491-a4d0-e4bb679eea01", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f964d5944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260", Pod:"calico-apiserver-6f964d5944-bnt8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1997f85229", MAC:"42:28:c0:20:4b:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:01.636241 containerd[1571]: 2025-09-13 
10:26:01.627 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-bnt8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--bnt8t-eth0" Sep 13 10:26:01.642876 containerd[1571]: time="2025-09-13T10:26:01.642840258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9whjr,Uid:1583222c-6d16-4797-9770-76267a81da49,Namespace:calico-system,Attempt:0,} returns sandbox id \"81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79\"" Sep 13 10:26:01.660560 containerd[1571]: time="2025-09-13T10:26:01.660122108Z" level=info msg="connecting to shim 6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260" address="unix:///run/containerd/s/9939ac367948126c7b0afbe503b8f04371c0347f132a8baf48b45e6a83577e82" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:01.689746 systemd[1]: Started cri-containerd-6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260.scope - libcontainer container 6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260. Sep 13 10:26:01.708340 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:01.719800 systemd-networkd[1472]: calif18072688e3: Link UP Sep 13 10:26:01.722826 systemd-networkd[1472]: calif18072688e3: Gained carrier Sep 13 10:26:01.742625 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:47230.service - OpenSSH per-connection server daemon (10.0.0.1:47230). 
Sep 13 10:26:01.746007 containerd[1571]: 2025-09-13 10:26:01.426 [INFO][4217] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:01.746007 containerd[1571]: 2025-09-13 10:26:01.439 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0 calico-kube-controllers-dd9b5bf75- calico-system a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e 830 0 2025-09-13 10:25:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dd9b5bf75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dd9b5bf75-8xjv4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif18072688e3 [] [] }} ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-" Sep 13 10:26:01.746007 containerd[1571]: 2025-09-13 10:26:01.439 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0" Sep 13 10:26:01.746007 containerd[1571]: 2025-09-13 10:26:01.477 [INFO][4236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" HandleID="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Workload="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0" Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 
10:26:01.477 [INFO][4236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" HandleID="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Workload="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7a60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dd9b5bf75-8xjv4", "timestamp":"2025-09-13 10:26:01.477338222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.477 [INFO][4236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.608 [INFO][4236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.682 [INFO][4236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" host="localhost"
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.689 [INFO][4236] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.695 [INFO][4236] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.696 [INFO][4236] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.698 [INFO][4236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:01.746261 containerd[1571]: 2025-09-13 10:26:01.699 [INFO][4236] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" host="localhost"
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.700 [INFO][4236] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.704 [INFO][4236] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" host="localhost"
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.709 [INFO][4236] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" host="localhost"
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.709 [INFO][4236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" host="localhost"
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.709 [INFO][4236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 10:26:01.746612 containerd[1571]: 2025-09-13 10:26:01.709 [INFO][4236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" HandleID="k8s-pod-network.8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Workload="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0"
Sep 13 10:26:01.746780 containerd[1571]: 2025-09-13 10:26:01.712 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0", GenerateName:"calico-kube-controllers-dd9b5bf75-", Namespace:"calico-system", SelfLink:"", UID:"a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd9b5bf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dd9b5bf75-8xjv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif18072688e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:01.746855 containerd[1571]: 2025-09-13 10:26:01.713 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0"
Sep 13 10:26:01.746855 containerd[1571]: 2025-09-13 10:26:01.713 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif18072688e3 ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0"
Sep 13 10:26:01.746855 containerd[1571]: 2025-09-13 10:26:01.721 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0"
Sep 13 10:26:01.746934 containerd[1571]: 2025-09-13 10:26:01.725 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0", GenerateName:"calico-kube-controllers-dd9b5bf75-", Namespace:"calico-system", SelfLink:"", UID:"a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd9b5bf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c", Pod:"calico-kube-controllers-dd9b5bf75-8xjv4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif18072688e3", MAC:"02:f8:c2:22:31:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:01.747016 containerd[1571]: 2025-09-13 10:26:01.738 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" Namespace="calico-system" Pod="calico-kube-controllers-dd9b5bf75-8xjv4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd9b5bf75--8xjv4-eth0"
Sep 13 10:26:01.761247 containerd[1571]: time="2025-09-13T10:26:01.761213022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-bnt8t,Uid:0b2c76b8-96c3-4491-a4d0-e4bb679eea01,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260\""
Sep 13 10:26:01.775731 containerd[1571]: time="2025-09-13T10:26:01.775680337Z" level=info msg="connecting to shim 8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c" address="unix:///run/containerd/s/b3b5ca7472ca0a17090f572228d498afda3a610966c0bf04e6a60045c4de0256" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:01.803681 systemd[1]: Started cri-containerd-8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c.scope - libcontainer container 8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c.
Sep 13 10:26:01.816582 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 47230 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:01.817369 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:26:01.818861 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:01.824095 systemd-logind[1517]: New session 10 of user core.
Sep 13 10:26:01.828666 systemd[1]: Started session-10.scope - Session 10 of User core.
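The containerd lines above trace Calico's CNI IPAM sequence for the calico-kube-controllers pod: acquire the host-wide IPAM lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free address from that block, write the block to record the handle, then release the lock. A minimal, hypothetical sketch of that allocation pattern (not Calico's actual implementation; the class, method names, and in-memory block are invented for illustration):

```python
# Hypothetical sketch of the block-affinity IPAM flow the log traces.
# NOT Calico's code: BlockIPAM and auto_assign are invented names.
import ipaddress
import threading

class BlockIPAM:
    """Toy allocator for one CIDR block affine to a single host."""

    def __init__(self, cidr: str, host: str):
        self.block = ipaddress.ip_network(cidr)
        self.host = host              # host that holds the block affinity
        self.allocations = {}         # ip -> handle ("Writing block in order to claim IPs")
        self.lock = threading.Lock()  # stands in for the host-wide IPAM lock

    def auto_assign(self, handle: str, host: str) -> str:
        # "Trying affinity for 192.168.88.128/26 host='localhost'"
        if host != self.host:
            raise RuntimeError(f"no block affinity for host {host!r}")
        with self.lock:  # "Acquired host-wide IPAM lock."
            for ip in self.block.hosts():          # .129 .. .190 for a /26
                if ip not in self.allocations:
                    self.allocations[ip] = handle  # claim and record the handle
                    return f"{ip}/{self.block.prefixlen}"
            raise RuntimeError("block exhausted")
        # leaving the with-block corresponds to "Released host-wide IPAM lock."

ipam = BlockIPAM("192.168.88.128/26", host="localhost")
first = ipam.auto_assign("demo-handle-1", "localhost")  # "192.168.88.129/26"
```

In the log the pod receives .132 rather than the block's first host address because earlier workloads on the node had already claimed the preceding IPs from the same /26.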
Sep 13 10:26:01.849109 containerd[1571]: time="2025-09-13T10:26:01.849072463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd9b5bf75-8xjv4,Uid:a9b1d0e6-d124-4ce1-b03f-2f2566e34a5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c\""
Sep 13 10:26:01.981009 sshd[4416]: Connection closed by 10.0.0.1 port 47230
Sep 13 10:26:01.981282 sshd-session[4361]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:01.987151 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:47230.service: Deactivated successfully.
Sep 13 10:26:01.990407 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 10:26:01.992070 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit.
Sep 13 10:26:01.993316 systemd-logind[1517]: Removed session 10.
Sep 13 10:26:02.378056 kubelet[2735]: E0913 10:26:02.378012 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:02.378529 kubelet[2735]: E0913 10:26:02.378244 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:02.378960 containerd[1571]: time="2025-09-13T10:26:02.378908981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-f9ntd,Uid:ff73d452-8bc0-4b29-883d-3a8f8df30df4,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 10:26:02.379649 containerd[1571]: time="2025-09-13T10:26:02.379609356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgdxp,Uid:332cfa20-cbf3-471e-a252-91d5c9d68d05,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:02.379776 containerd[1571]: time="2025-09-13T10:26:02.379612642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pl89,Uid:bc75f4fb-0a1c-49fb-a0d7-141e5083d312,Namespace:kube-system,Attempt:0,}"
Sep 13 10:26:02.717224 systemd-networkd[1472]: cali8a856e15b58: Link UP
Sep 13 10:26:02.719006 systemd-networkd[1472]: cali8a856e15b58: Gained carrier
Sep 13 10:26:02.728690 systemd-networkd[1472]: calie1997f85229: Gained IPv6LL
Sep 13 10:26:02.735675 containerd[1571]: 2025-09-13 10:26:02.503 [INFO][4475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 13 10:26:02.735675 containerd[1571]: 2025-09-13 10:26:02.515 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0 coredns-668d6bf9bc- kube-system 332cfa20-cbf3-471e-a252-91d5c9d68d05 823 0 2025-09-13 10:25:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wgdxp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a856e15b58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-"
Sep 13 10:26:02.735675 containerd[1571]: 2025-09-13 10:26:02.515 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.735675 containerd[1571]: 2025-09-13 10:26:02.542 [INFO][4508] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" HandleID="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Workload="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.542 [INFO][4508] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" HandleID="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Workload="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cfe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wgdxp", "timestamp":"2025-09-13 10:26:02.542275429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.542 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.542 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.542 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.549 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" host="localhost"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.555 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.558 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.560 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.696 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:02.735967 containerd[1571]: 2025-09-13 10:26:02.696 [INFO][4508] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" host="localhost"
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.699 [INFO][4508] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.703 [INFO][4508] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" host="localhost"
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.709 [INFO][4508] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" host="localhost"
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.710 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" host="localhost"
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.710 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 10:26:02.736286 containerd[1571]: 2025-09-13 10:26:02.710 [INFO][4508] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" HandleID="k8s-pod-network.2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Workload="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.736450 containerd[1571]: 2025-09-13 10:26:02.713 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"332cfa20-cbf3-471e-a252-91d5c9d68d05", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wgdxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a856e15b58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:02.736790 containerd[1571]: 2025-09-13 10:26:02.713 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.736790 containerd[1571]: 2025-09-13 10:26:02.713 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a856e15b58 ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.736790 containerd[1571]: 2025-09-13 10:26:02.719 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.736900 containerd[1571]: 2025-09-13 10:26:02.719 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"332cfa20-cbf3-471e-a252-91d5c9d68d05", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da", Pod:"coredns-668d6bf9bc-wgdxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a856e15b58", MAC:"1a:4d:a0:54:0c:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:02.736900 containerd[1571]: 2025-09-13 10:26:02.732 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" Namespace="kube-system" Pod="coredns-668d6bf9bc-wgdxp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wgdxp-eth0"
Sep 13 10:26:02.758877 systemd-networkd[1472]: cali17f232f2a49: Link UP
Sep 13 10:26:02.760000 systemd-networkd[1472]: cali17f232f2a49: Gained carrier
Sep 13 10:26:02.767151 containerd[1571]: time="2025-09-13T10:26:02.767095747Z" level=info msg="connecting to shim 2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da" address="unix:///run/containerd/s/0fc796612c91ef99bc69ba6a91843fe698bcfc319e97740f0a070b31de9355c5" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.496 [INFO][4463] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.509 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2pl89-eth0 coredns-668d6bf9bc- kube-system bc75f4fb-0a1c-49fb-a0d7-141e5083d312 827 0 2025-09-13 10:25:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2pl89 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17f232f2a49 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.510 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.550 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" HandleID="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Workload="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.551 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" HandleID="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Workload="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038e0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2pl89", "timestamp":"2025-09-13 10:26:02.550617771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.551 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.710 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.710 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.716 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.727 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.734 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.736 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.738 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.738 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.740 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.743 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.750 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.750 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" host="localhost"
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.750 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 10:26:02.775104 containerd[1571]: 2025-09-13 10:26:02.750 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" HandleID="k8s-pod-network.ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Workload="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.753 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2pl89-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc75f4fb-0a1c-49fb-a0d7-141e5083d312", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2pl89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17f232f2a49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.754 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.754 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17f232f2a49 ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.761 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.761 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2pl89-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc75f4fb-0a1c-49fb-a0d7-141e5083d312", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6", Pod:"coredns-668d6bf9bc-2pl89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17f232f2a49", MAC:"16:1e:5a:66:6a:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 10:26:02.776108 containerd[1571]: 2025-09-13 10:26:02.771 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" Namespace="kube-system" Pod="coredns-668d6bf9bc-2pl89" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2pl89-eth0"
Sep 13 10:26:02.800799 systemd[1]: Started cri-containerd-2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da.scope - libcontainer container 2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da.
Sep 13 10:26:02.810570 containerd[1571]: time="2025-09-13T10:26:02.810482468Z" level=info msg="connecting to shim ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6" address="unix:///run/containerd/s/70ad911654d41be6b4e104e026b65a046e08c1978d0115f29d00953ce082b369" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:26:02.818837 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:26:02.845815 systemd[1]: Started cri-containerd-ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6.scope - libcontainer container ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6.
Sep 13 10:26:02.860915 containerd[1571]: time="2025-09-13T10:26:02.860866496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgdxp,Uid:332cfa20-cbf3-471e-a252-91d5c9d68d05,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da\"" Sep 13 10:26:02.861911 kubelet[2735]: E0913 10:26:02.861871 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:02.868130 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:02.872335 containerd[1571]: time="2025-09-13T10:26:02.871793592Z" level=info msg="CreateContainer within sandbox \"2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:26:02.877779 systemd-networkd[1472]: cali17c3d0827a2: Link UP Sep 13 10:26:02.879341 systemd-networkd[1472]: cali17c3d0827a2: Gained carrier Sep 13 10:26:02.895027 containerd[1571]: time="2025-09-13T10:26:02.894976950Z" level=info msg="Container 17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.505 [INFO][4468] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.517 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0 calico-apiserver-6f964d5944- calico-apiserver ff73d452-8bc0-4b29-883d-3a8f8df30df4 832 0 2025-09-13 10:25:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f964d5944 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f964d5944-f9ntd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali17c3d0827a2 [] [] }} ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.517 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.553 [INFO][4518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" HandleID="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Workload="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.553 [INFO][4518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" HandleID="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Workload="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f964d5944-f9ntd", "timestamp":"2025-09-13 10:26:02.553633534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.553 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.751 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.751 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.817 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.825 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.834 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.838 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.840 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.840 [INFO][4518] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.843 [INFO][4518] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590 Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.854 [INFO][4518] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.863 [INFO][4518] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.864 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" host="localhost" Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.864 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:02.898170 containerd[1571]: 2025-09-13 10:26:02.864 [INFO][4518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" HandleID="k8s-pod-network.a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Workload="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.873 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0", GenerateName:"calico-apiserver-6f964d5944-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff73d452-8bc0-4b29-883d-3a8f8df30df4", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 35, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f964d5944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f964d5944-f9ntd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17c3d0827a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.874 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.874 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17c3d0827a2 ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.880 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.880 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0", GenerateName:"calico-apiserver-6f964d5944-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff73d452-8bc0-4b29-883d-3a8f8df30df4", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f964d5944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590", Pod:"calico-apiserver-6f964d5944-f9ntd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17c3d0827a2", MAC:"f6:b6:7e:fe:a5:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:02.898904 containerd[1571]: 2025-09-13 10:26:02.892 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" Namespace="calico-apiserver" Pod="calico-apiserver-6f964d5944-f9ntd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f964d5944--f9ntd-eth0" Sep 13 10:26:02.904263 containerd[1571]: time="2025-09-13T10:26:02.904211648Z" level=info msg="CreateContainer within sandbox \"2f17d81393a04fc638fa226b45f7f2df86497ade392832df4fe303968f0cd4da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e\"" Sep 13 10:26:02.905933 containerd[1571]: time="2025-09-13T10:26:02.905890610Z" level=info msg="StartContainer for \"17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e\"" Sep 13 10:26:02.907435 containerd[1571]: time="2025-09-13T10:26:02.907307802Z" level=info msg="connecting to shim 17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e" address="unix:///run/containerd/s/0fc796612c91ef99bc69ba6a91843fe698bcfc319e97740f0a070b31de9355c5" protocol=ttrpc version=3 Sep 13 10:26:02.912151 containerd[1571]: time="2025-09-13T10:26:02.912099058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2pl89,Uid:bc75f4fb-0a1c-49fb-a0d7-141e5083d312,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6\"" Sep 13 10:26:02.912871 kubelet[2735]: E0913 10:26:02.912844 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:02.915091 
containerd[1571]: time="2025-09-13T10:26:02.915017757Z" level=info msg="CreateContainer within sandbox \"ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:26:02.920716 systemd-networkd[1472]: cali8233a9cbeed: Gained IPv6LL Sep 13 10:26:02.922168 systemd-networkd[1472]: calif18072688e3: Gained IPv6LL Sep 13 10:26:02.926225 containerd[1571]: time="2025-09-13T10:26:02.926182960Z" level=info msg="Container 387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:02.929128 containerd[1571]: time="2025-09-13T10:26:02.929092073Z" level=info msg="connecting to shim a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590" address="unix:///run/containerd/s/eda3b42020048d78983444a42750c7bffbc46a9fb7f38e1cf4fb8b2a49408066" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:02.929789 systemd[1]: Started cri-containerd-17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e.scope - libcontainer container 17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e. 
Sep 13 10:26:02.935730 containerd[1571]: time="2025-09-13T10:26:02.935684170Z" level=info msg="CreateContainer within sandbox \"ce8303e806e2ff64de9c70b05104757b064ca07d75efbc7f85e344a337aaebc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7\"" Sep 13 10:26:02.936615 containerd[1571]: time="2025-09-13T10:26:02.936503197Z" level=info msg="StartContainer for \"387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7\"" Sep 13 10:26:02.937500 containerd[1571]: time="2025-09-13T10:26:02.937461707Z" level=info msg="connecting to shim 387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7" address="unix:///run/containerd/s/70ad911654d41be6b4e104e026b65a046e08c1978d0115f29d00953ce082b369" protocol=ttrpc version=3 Sep 13 10:26:02.964687 systemd[1]: Started cri-containerd-387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7.scope - libcontainer container 387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7. Sep 13 10:26:02.969487 systemd[1]: Started cri-containerd-a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590.scope - libcontainer container a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590. 
Sep 13 10:26:02.998140 containerd[1571]: time="2025-09-13T10:26:02.998094908Z" level=info msg="StartContainer for \"17316586a0fa759f8f9b8476fe61ded5543ab3fe0e0182551763f554a673da4e\" returns successfully" Sep 13 10:26:02.999163 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:03.012573 containerd[1571]: time="2025-09-13T10:26:03.012520391Z" level=info msg="StartContainer for \"387214491a6d225748089821b0bf558bc049a7f52d75006a534edf0bf3600ed7\" returns successfully" Sep 13 10:26:03.033018 containerd[1571]: time="2025-09-13T10:26:03.032882129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f964d5944-f9ntd,Uid:ff73d452-8bc0-4b29-883d-3a8f8df30df4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590\"" Sep 13 10:26:03.288187 containerd[1571]: time="2025-09-13T10:26:03.288071558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:03.289022 containerd[1571]: time="2025-09-13T10:26:03.288935320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 10:26:03.290020 containerd[1571]: time="2025-09-13T10:26:03.289989799Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:03.291878 containerd[1571]: time="2025-09-13T10:26:03.291834032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:03.292401 containerd[1571]: time="2025-09-13T10:26:03.292369307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" 
with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.928560053s" Sep 13 10:26:03.292401 containerd[1571]: time="2025-09-13T10:26:03.292397189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 10:26:03.295053 containerd[1571]: time="2025-09-13T10:26:03.294836048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 10:26:03.296210 containerd[1571]: time="2025-09-13T10:26:03.296180773Z" level=info msg="CreateContainer within sandbox \"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 10:26:03.304350 containerd[1571]: time="2025-09-13T10:26:03.304315065Z" level=info msg="Container 6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:03.313553 containerd[1571]: time="2025-09-13T10:26:03.313496521Z" level=info msg="CreateContainer within sandbox \"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794\"" Sep 13 10:26:03.315479 containerd[1571]: time="2025-09-13T10:26:03.313863110Z" level=info msg="StartContainer for \"6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794\"" Sep 13 10:26:03.315479 containerd[1571]: time="2025-09-13T10:26:03.315021384Z" level=info msg="connecting to shim 6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794" address="unix:///run/containerd/s/a25563808964b299ad5214d68530191105cdd5b1c9f39213f7c205e826bfde3a" protocol=ttrpc version=3 Sep 13 
10:26:03.334085 kubelet[2735]: E0913 10:26:03.334034 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:03.341279 kubelet[2735]: E0913 10:26:03.341238 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:03.343816 systemd[1]: Started cri-containerd-6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794.scope - libcontainer container 6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794. Sep 13 10:26:03.375616 kubelet[2735]: I0913 10:26:03.375510 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wgdxp" podStartSLOduration=40.375483909 podStartE2EDuration="40.375483909s" podCreationTimestamp="2025-09-13 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:03.359002235 +0000 UTC m=+47.080867336" watchObservedRunningTime="2025-09-13 10:26:03.375483909 +0000 UTC m=+47.097348990" Sep 13 10:26:03.378487 containerd[1571]: time="2025-09-13T10:26:03.378441351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2fq7,Uid:36ae8918-1257-451c-80bc-5f671f0cac0f,Namespace:calico-system,Attempt:0,}" Sep 13 10:26:03.405967 kubelet[2735]: I0913 10:26:03.405885 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2pl89" podStartSLOduration=40.405861621 podStartE2EDuration="40.405861621s" podCreationTimestamp="2025-09-13 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:26:03.37668833 +0000 UTC m=+47.098553411" 
watchObservedRunningTime="2025-09-13 10:26:03.405861621 +0000 UTC m=+47.127726702" Sep 13 10:26:03.490811 containerd[1571]: time="2025-09-13T10:26:03.490704236Z" level=info msg="StartContainer for \"6da12b0b00e8f11c5b74fa78047539738c3f825b4dbec3a0c85db52b0a377794\" returns successfully" Sep 13 10:26:03.537774 systemd-networkd[1472]: cali3001546b90d: Link UP Sep 13 10:26:03.538389 systemd-networkd[1472]: cali3001546b90d: Gained carrier Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.431 [INFO][4803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.446 [INFO][4803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h2fq7-eth0 csi-node-driver- calico-system 36ae8918-1257-451c-80bc-5f671f0cac0f 707 0 2025-09-13 10:25:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h2fq7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3001546b90d [] [] }} ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.446 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.496 [INFO][4819] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" HandleID="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Workload="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.496 [INFO][4819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" HandleID="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Workload="localhost-k8s-csi--node--driver--h2fq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h2fq7", "timestamp":"2025-09-13 10:26:03.496630081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.496 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.496 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.496 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.505 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.510 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.513 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.517 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.520 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.520 [INFO][4819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.522 [INFO][4819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.525 [INFO][4819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.532 [INFO][4819] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.532 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" host="localhost" Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.532 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:26:03.553033 containerd[1571]: 2025-09-13 10:26:03.532 [INFO][4819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" HandleID="k8s-pod-network.a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Workload="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.536 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h2fq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36ae8918-1257-451c-80bc-5f671f0cac0f", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h2fq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3001546b90d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.536 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.536 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3001546b90d ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.538 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.539 [INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" 
Namespace="calico-system" Pod="csi-node-driver-h2fq7" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h2fq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36ae8918-1257-451c-80bc-5f671f0cac0f", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb", Pod:"csi-node-driver-h2fq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3001546b90d", MAC:"46:80:00:08:9b:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:26:03.553590 containerd[1571]: 2025-09-13 10:26:03.548 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" Namespace="calico-system" Pod="csi-node-driver-h2fq7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h2fq7-eth0" Sep 13 10:26:03.575139 containerd[1571]: time="2025-09-13T10:26:03.575083281Z" level=info msg="connecting to shim a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb" address="unix:///run/containerd/s/492ee19d2900dfc91ae396cd6c9c60da74311f9e372641d163f5343b47d27086" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:26:03.610670 systemd[1]: Started cri-containerd-a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb.scope - libcontainer container a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb. Sep 13 10:26:03.622715 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:26:03.635307 containerd[1571]: time="2025-09-13T10:26:03.635260299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2fq7,Uid:36ae8918-1257-451c-80bc-5f671f0cac0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb\"" Sep 13 10:26:03.944677 systemd-networkd[1472]: cali17f232f2a49: Gained IPv6LL Sep 13 10:26:04.349485 kubelet[2735]: E0913 10:26:04.349389 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:04.349485 kubelet[2735]: E0913 10:26:04.349460 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:04.520841 systemd-networkd[1472]: cali8a856e15b58: Gained IPv6LL Sep 13 10:26:04.712771 systemd-networkd[1472]: cali17c3d0827a2: Gained IPv6LL Sep 13 10:26:05.289696 systemd-networkd[1472]: cali3001546b90d: Gained IPv6LL Sep 13 10:26:05.351416 kubelet[2735]: E0913 10:26:05.351349 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:05.352174 kubelet[2735]: E0913 10:26:05.352151 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:05.636474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811306666.mount: Deactivated successfully. Sep 13 10:26:06.165300 containerd[1571]: time="2025-09-13T10:26:06.165239745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:06.166075 containerd[1571]: time="2025-09-13T10:26:06.166016153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 10:26:06.167108 containerd[1571]: time="2025-09-13T10:26:06.167068578Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:06.169345 containerd[1571]: time="2025-09-13T10:26:06.169309825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:06.170037 containerd[1571]: time="2025-09-13T10:26:06.169996905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.8751303s" Sep 13 10:26:06.170037 containerd[1571]: time="2025-09-13T10:26:06.170021742Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 10:26:06.171810 containerd[1571]: time="2025-09-13T10:26:06.171779181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 10:26:06.173819 containerd[1571]: time="2025-09-13T10:26:06.173775669Z" level=info msg="CreateContainer within sandbox \"81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 10:26:06.183146 containerd[1571]: time="2025-09-13T10:26:06.182720148Z" level=info msg="Container d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:06.192382 containerd[1571]: time="2025-09-13T10:26:06.192317464Z" level=info msg="CreateContainer within sandbox \"81682304e4f96924eaa026f081804987d80d3a035afbb5f82194b09e4d514a79\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\"" Sep 13 10:26:06.192936 containerd[1571]: time="2025-09-13T10:26:06.192898575Z" level=info msg="StartContainer for \"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\"" Sep 13 10:26:06.194254 containerd[1571]: time="2025-09-13T10:26:06.194220686Z" level=info msg="connecting to shim d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10" address="unix:///run/containerd/s/dff1c2b7735b964df74da33774302f57e91be269ccce3e5102f0f87aeb87c7ab" protocol=ttrpc version=3 Sep 13 10:26:06.227186 systemd[1]: Started cri-containerd-d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10.scope - libcontainer container d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10. 
Sep 13 10:26:06.289683 containerd[1571]: time="2025-09-13T10:26:06.289636513Z" level=info msg="StartContainer for \"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" returns successfully" Sep 13 10:26:06.454727 containerd[1571]: time="2025-09-13T10:26:06.454518668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" id:\"dd0e5e925cba7fb068ab4e7042839eb9a993d5fa41028f3f2a30d409e2a3b00f\" pid:5007 exit_status:1 exited_at:{seconds:1757759166 nanos:454116073}" Sep 13 10:26:06.851149 kubelet[2735]: I0913 10:26:06.851116 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:26:06.851651 kubelet[2735]: E0913 10:26:06.851499 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:06.868571 kubelet[2735]: I0913 10:26:06.868491 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-9whjr" podStartSLOduration=25.348629067 podStartE2EDuration="29.868472511s" podCreationTimestamp="2025-09-13 10:25:37 +0000 UTC" firstStartedPulling="2025-09-13 10:26:01.651243517 +0000 UTC m=+45.373108599" lastFinishedPulling="2025-09-13 10:26:06.171086962 +0000 UTC m=+49.892952043" observedRunningTime="2025-09-13 10:26:06.368458242 +0000 UTC m=+50.090323323" watchObservedRunningTime="2025-09-13 10:26:06.868472511 +0000 UTC m=+50.590337592" Sep 13 10:26:06.997047 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:47234.service - OpenSSH per-connection server daemon (10.0.0.1:47234). 
Sep 13 10:26:07.135852 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 47234 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:26:07.137618 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:26:07.142271 systemd-logind[1517]: New session 11 of user core. Sep 13 10:26:07.152681 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 10:26:07.294260 sshd[5049]: Connection closed by 10.0.0.1 port 47234 Sep 13 10:26:07.294793 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Sep 13 10:26:07.300986 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:47234.service: Deactivated successfully. Sep 13 10:26:07.303650 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 10:26:07.305098 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Sep 13 10:26:07.307458 systemd-logind[1517]: Removed session 11. Sep 13 10:26:07.357954 kubelet[2735]: E0913 10:26:07.357900 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:26:07.447983 containerd[1571]: time="2025-09-13T10:26:07.447526339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" id:\"cf04ec6115d29e95d435f8b2ccdd03150267d5b05d968c9b999da8241e120c8b\" pid:5093 exit_status:1 exited_at:{seconds:1757759167 nanos:447146816}" Sep 13 10:26:07.660447 systemd-networkd[1472]: vxlan.calico: Link UP Sep 13 10:26:07.660457 systemd-networkd[1472]: vxlan.calico: Gained carrier Sep 13 10:26:09.128741 systemd-networkd[1472]: vxlan.calico: Gained IPv6LL Sep 13 10:26:09.219853 containerd[1571]: time="2025-09-13T10:26:09.219805004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
10:26:09.220782 containerd[1571]: time="2025-09-13T10:26:09.220745639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 10:26:09.222728 containerd[1571]: time="2025-09-13T10:26:09.222676563Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:09.225228 containerd[1571]: time="2025-09-13T10:26:09.225183959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:09.225747 containerd[1571]: time="2025-09-13T10:26:09.225717450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.053907722s" Sep 13 10:26:09.225819 containerd[1571]: time="2025-09-13T10:26:09.225747296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 10:26:09.240390 containerd[1571]: time="2025-09-13T10:26:09.240361285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 10:26:09.251151 containerd[1571]: time="2025-09-13T10:26:09.251078009Z" level=info msg="CreateContainer within sandbox \"6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 10:26:09.258609 containerd[1571]: time="2025-09-13T10:26:09.258550274Z" level=info msg="Container 
5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:09.266240 containerd[1571]: time="2025-09-13T10:26:09.266195373Z" level=info msg="CreateContainer within sandbox \"6630cd2caaf04b6c3c75bcb1f8b7036888fe0d1ada1ad52955bcd6c6ade9c260\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e\"" Sep 13 10:26:09.266824 containerd[1571]: time="2025-09-13T10:26:09.266773287Z" level=info msg="StartContainer for \"5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e\"" Sep 13 10:26:09.267796 containerd[1571]: time="2025-09-13T10:26:09.267751503Z" level=info msg="connecting to shim 5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e" address="unix:///run/containerd/s/9939ac367948126c7b0afbe503b8f04371c0347f132a8baf48b45e6a83577e82" protocol=ttrpc version=3 Sep 13 10:26:09.310677 systemd[1]: Started cri-containerd-5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e.scope - libcontainer container 5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e. 
Sep 13 10:26:09.365844 containerd[1571]: time="2025-09-13T10:26:09.365781393Z" level=info msg="StartContainer for \"5b40b3ae2ca6e3e3e674e08ac6ed95604a695c19ae87669f6c96eb5f21b5c18e\" returns successfully" Sep 13 10:26:10.390213 kubelet[2735]: I0913 10:26:10.390143 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f964d5944-bnt8t" podStartSLOduration=27.912622349 podStartE2EDuration="35.390126774s" podCreationTimestamp="2025-09-13 10:25:35 +0000 UTC" firstStartedPulling="2025-09-13 10:26:01.762681279 +0000 UTC m=+45.484546361" lastFinishedPulling="2025-09-13 10:26:09.240185705 +0000 UTC m=+52.962050786" observedRunningTime="2025-09-13 10:26:10.389263244 +0000 UTC m=+54.111128335" watchObservedRunningTime="2025-09-13 10:26:10.390126774 +0000 UTC m=+54.111991855" Sep 13 10:26:11.372965 kubelet[2735]: I0913 10:26:11.372907 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:26:12.840635 containerd[1571]: time="2025-09-13T10:26:12.840583233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:12.841933 containerd[1571]: time="2025-09-13T10:26:12.841866482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 10:26:12.843066 containerd[1571]: time="2025-09-13T10:26:12.843029264Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:12.845324 containerd[1571]: time="2025-09-13T10:26:12.845293022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:12.849938 containerd[1571]: 
time="2025-09-13T10:26:12.849903975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.609515037s" Sep 13 10:26:12.849997 containerd[1571]: time="2025-09-13T10:26:12.849940183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 10:26:12.850866 containerd[1571]: time="2025-09-13T10:26:12.850839110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 10:26:12.851399 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:38118.service - OpenSSH per-connection server daemon (10.0.0.1:38118). Sep 13 10:26:12.862430 containerd[1571]: time="2025-09-13T10:26:12.862393233Z" level=info msg="CreateContainer within sandbox \"8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 10:26:12.872076 containerd[1571]: time="2025-09-13T10:26:12.872025579Z" level=info msg="Container 1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:12.882240 containerd[1571]: time="2025-09-13T10:26:12.882190125Z" level=info msg="CreateContainer within sandbox \"8d38e88181c54ac939450eb9f60a6e6e073a97ca3c3e3f8a953d5f8e470c0c3c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\"" Sep 13 10:26:12.883966 containerd[1571]: time="2025-09-13T10:26:12.883933808Z" level=info msg="StartContainer for 
\"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\"" Sep 13 10:26:12.885331 containerd[1571]: time="2025-09-13T10:26:12.885300643Z" level=info msg="connecting to shim 1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79" address="unix:///run/containerd/s/b3b5ca7472ca0a17090f572228d498afda3a610966c0bf04e6a60045c4de0256" protocol=ttrpc version=3 Sep 13 10:26:12.928759 systemd[1]: Started cri-containerd-1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79.scope - libcontainer container 1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79. Sep 13 10:26:12.936559 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 38118 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:26:12.937271 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:26:12.942964 systemd-logind[1517]: New session 12 of user core. Sep 13 10:26:12.954799 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 10:26:12.996740 containerd[1571]: time="2025-09-13T10:26:12.996701776Z" level=info msg="StartContainer for \"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\" returns successfully" Sep 13 10:26:13.112731 sshd[5306]: Connection closed by 10.0.0.1 port 38118 Sep 13 10:26:13.112996 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Sep 13 10:26:13.118241 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:38118.service: Deactivated successfully. Sep 13 10:26:13.120957 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 10:26:13.123573 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Sep 13 10:26:13.125171 systemd-logind[1517]: Removed session 12. 
Sep 13 10:26:13.432950 containerd[1571]: time="2025-09-13T10:26:13.432825933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\" id:\"a28b6f7abc4f0937ea4357776f3e18d1848ad393e5cec433256204679cc99e62\" pid:5359 exited_at:{seconds:1757759173 nanos:432431292}" Sep 13 10:26:13.478165 kubelet[2735]: I0913 10:26:13.478065 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dd9b5bf75-8xjv4" podStartSLOduration=24.477416382 podStartE2EDuration="35.478046094s" podCreationTimestamp="2025-09-13 10:25:38 +0000 UTC" firstStartedPulling="2025-09-13 10:26:01.850064055 +0000 UTC m=+45.571929136" lastFinishedPulling="2025-09-13 10:26:12.850693777 +0000 UTC m=+56.572558848" observedRunningTime="2025-09-13 10:26:13.477643958 +0000 UTC m=+57.199509029" watchObservedRunningTime="2025-09-13 10:26:13.478046094 +0000 UTC m=+57.199911185" Sep 13 10:26:13.663756 containerd[1571]: time="2025-09-13T10:26:13.663683402Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:13.664489 containerd[1571]: time="2025-09-13T10:26:13.664446514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 10:26:13.671878 containerd[1571]: time="2025-09-13T10:26:13.671825903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 820.957008ms" Sep 13 10:26:13.671878 containerd[1571]: time="2025-09-13T10:26:13.671858434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" 
returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 10:26:13.673098 containerd[1571]: time="2025-09-13T10:26:13.673043548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 10:26:13.674003 containerd[1571]: time="2025-09-13T10:26:13.673981819Z" level=info msg="CreateContainer within sandbox \"a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 10:26:13.952995 containerd[1571]: time="2025-09-13T10:26:13.951926032Z" level=info msg="Container 9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:14.052793 containerd[1571]: time="2025-09-13T10:26:14.052748735Z" level=info msg="CreateContainer within sandbox \"a2591befb6a1ae7d40c614aff8a340c4331eda474bc96332b4ee5fcc96425590\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b\"" Sep 13 10:26:14.053299 containerd[1571]: time="2025-09-13T10:26:14.053263941Z" level=info msg="StartContainer for \"9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b\"" Sep 13 10:26:14.054552 containerd[1571]: time="2025-09-13T10:26:14.054493418Z" level=info msg="connecting to shim 9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b" address="unix:///run/containerd/s/eda3b42020048d78983444a42750c7bffbc46a9fb7f38e1cf4fb8b2a49408066" protocol=ttrpc version=3 Sep 13 10:26:14.081703 systemd[1]: Started cri-containerd-9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b.scope - libcontainer container 9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b. 
Sep 13 10:26:14.133402 containerd[1571]: time="2025-09-13T10:26:14.133351200Z" level=info msg="StartContainer for \"9d39b1c209530ed1454c0dc4d7e379d10b5071fa5d3462c90c30fe0bea4ba22b\" returns successfully" Sep 13 10:26:14.392561 kubelet[2735]: I0913 10:26:14.392073 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f964d5944-f9ntd" podStartSLOduration=28.754467894 podStartE2EDuration="39.392053348s" podCreationTimestamp="2025-09-13 10:25:35 +0000 UTC" firstStartedPulling="2025-09-13 10:26:03.034990968 +0000 UTC m=+46.756856049" lastFinishedPulling="2025-09-13 10:26:13.672576422 +0000 UTC m=+57.394441503" observedRunningTime="2025-09-13 10:26:14.391617079 +0000 UTC m=+58.113482170" watchObservedRunningTime="2025-09-13 10:26:14.392053348 +0000 UTC m=+58.113918430" Sep 13 10:26:15.208367 kubelet[2735]: I0913 10:26:15.208317 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:26:15.631900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839143494.mount: Deactivated successfully. 
Sep 13 10:26:16.429889 containerd[1571]: time="2025-09-13T10:26:16.429812207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:16.430713 containerd[1571]: time="2025-09-13T10:26:16.430675467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 10:26:16.432009 containerd[1571]: time="2025-09-13T10:26:16.431961645Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:16.434213 containerd[1571]: time="2025-09-13T10:26:16.434172742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:16.434792 containerd[1571]: time="2025-09-13T10:26:16.434759127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.761652309s" Sep 13 10:26:16.434792 containerd[1571]: time="2025-09-13T10:26:16.434788723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 10:26:16.435687 containerd[1571]: time="2025-09-13T10:26:16.435637375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 10:26:16.437923 containerd[1571]: time="2025-09-13T10:26:16.437876537Z" level=info msg="CreateContainer within sandbox 
\"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 10:26:16.447522 containerd[1571]: time="2025-09-13T10:26:16.447461239Z" level=info msg="Container 195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:16.463025 containerd[1571]: time="2025-09-13T10:26:16.462988637Z" level=info msg="CreateContainer within sandbox \"c301eef96e8fdc94c342ba0f0b83b0ac6d16dd4f944afb2297588013c90b97d6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd\"" Sep 13 10:26:16.463488 containerd[1571]: time="2025-09-13T10:26:16.463456291Z" level=info msg="StartContainer for \"195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd\"" Sep 13 10:26:16.464504 containerd[1571]: time="2025-09-13T10:26:16.464478480Z" level=info msg="connecting to shim 195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd" address="unix:///run/containerd/s/a25563808964b299ad5214d68530191105cdd5b1c9f39213f7c205e826bfde3a" protocol=ttrpc version=3 Sep 13 10:26:16.494685 systemd[1]: Started cri-containerd-195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd.scope - libcontainer container 195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd. 
Sep 13 10:26:16.545277 containerd[1571]: time="2025-09-13T10:26:16.545214074Z" level=info msg="StartContainer for \"195e87ff1818b1cda1fc36876a3ed6a9e3d3bff039724e386975f2673c6641dd\" returns successfully" Sep 13 10:26:17.401525 kubelet[2735]: I0913 10:26:17.401278 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6494d87476-7j9dc" podStartSLOduration=3.3285320609999998 podStartE2EDuration="18.401260114s" podCreationTimestamp="2025-09-13 10:25:59 +0000 UTC" firstStartedPulling="2025-09-13 10:26:01.362743241 +0000 UTC m=+45.084608323" lastFinishedPulling="2025-09-13 10:26:16.435471295 +0000 UTC m=+60.157336376" observedRunningTime="2025-09-13 10:26:17.400128807 +0000 UTC m=+61.121993908" watchObservedRunningTime="2025-09-13 10:26:17.401260114 +0000 UTC m=+61.123125195" Sep 13 10:26:18.019922 containerd[1571]: time="2025-09-13T10:26:18.019861112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:18.020853 containerd[1571]: time="2025-09-13T10:26:18.020821858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 10:26:18.022273 containerd[1571]: time="2025-09-13T10:26:18.022240618Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:18.024424 containerd[1571]: time="2025-09-13T10:26:18.024355071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:26:18.024944 containerd[1571]: time="2025-09-13T10:26:18.024911235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.589242649s" Sep 13 10:26:18.024997 containerd[1571]: time="2025-09-13T10:26:18.024944179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 10:26:18.027018 containerd[1571]: time="2025-09-13T10:26:18.026990020Z" level=info msg="CreateContainer within sandbox \"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 10:26:18.038987 containerd[1571]: time="2025-09-13T10:26:18.038939649Z" level=info msg="Container 4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:26:18.054111 containerd[1571]: time="2025-09-13T10:26:18.054051415Z" level=info msg="CreateContainer within sandbox \"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c\"" Sep 13 10:26:18.054693 containerd[1571]: time="2025-09-13T10:26:18.054665100Z" level=info msg="StartContainer for \"4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c\"" Sep 13 10:26:18.056545 containerd[1571]: time="2025-09-13T10:26:18.056467470Z" level=info msg="connecting to shim 4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c" address="unix:///run/containerd/s/492ee19d2900dfc91ae396cd6c9c60da74311f9e372641d163f5343b47d27086" protocol=ttrpc version=3 Sep 13 10:26:18.082692 systemd[1]: Started cri-containerd-4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c.scope - libcontainer container 
4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c. Sep 13 10:26:18.124042 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:38132.service - OpenSSH per-connection server daemon (10.0.0.1:38132). Sep 13 10:26:18.157454 containerd[1571]: time="2025-09-13T10:26:18.156637721Z" level=info msg="StartContainer for \"4a98a01faef9fc2894a394bba3f8505d30370335a00be9f6da6b420bdd69199c\" returns successfully" Sep 13 10:26:18.160774 containerd[1571]: time="2025-09-13T10:26:18.160014693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 10:26:18.205898 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 38132 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:26:18.207977 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:26:18.213347 systemd-logind[1517]: New session 13 of user core. Sep 13 10:26:18.221738 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 10:26:18.390494 sshd[5502]: Connection closed by 10.0.0.1 port 38132 Sep 13 10:26:18.390165 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Sep 13 10:26:18.402841 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:38132.service: Deactivated successfully. Sep 13 10:26:18.405139 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 10:26:18.405916 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Sep 13 10:26:18.408256 systemd-logind[1517]: Removed session 13. Sep 13 10:26:18.409459 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:38140.service - OpenSSH per-connection server daemon (10.0.0.1:38140). 
Sep 13 10:26:18.468968 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 38140 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:18.470685 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:18.475339 systemd-logind[1517]: New session 14 of user core.
Sep 13 10:26:18.483688 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 10:26:18.647929 sshd[5523]: Connection closed by 10.0.0.1 port 38140
Sep 13 10:26:18.649259 sshd-session[5520]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:18.663990 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:38140.service: Deactivated successfully.
Sep 13 10:26:18.666521 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 10:26:18.670835 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit.
Sep 13 10:26:18.675913 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:38150.service - OpenSSH per-connection server daemon (10.0.0.1:38150).
Sep 13 10:26:18.677472 systemd-logind[1517]: Removed session 14.
Sep 13 10:26:18.741550 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 38150 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:18.743223 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:18.748365 systemd-logind[1517]: New session 15 of user core.
Sep 13 10:26:18.754691 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 10:26:18.878721 sshd[5547]: Connection closed by 10.0.0.1 port 38150
Sep 13 10:26:18.879103 sshd-session[5544]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:18.884211 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:38150.service: Deactivated successfully.
Sep 13 10:26:18.886499 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 10:26:18.887322 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit.
Sep 13 10:26:18.888512 systemd-logind[1517]: Removed session 15.
Sep 13 10:26:20.512004 containerd[1571]: time="2025-09-13T10:26:20.511931871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:20.512722 containerd[1571]: time="2025-09-13T10:26:20.512643233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 10:26:20.513872 containerd[1571]: time="2025-09-13T10:26:20.513837035Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:20.516090 containerd[1571]: time="2025-09-13T10:26:20.516038821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:26:20.516675 containerd[1571]: time="2025-09-13T10:26:20.516642254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.355702495s"
Sep 13 10:26:20.516675 containerd[1571]: time="2025-09-13T10:26:20.516675669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 10:26:20.518903 containerd[1571]: time="2025-09-13T10:26:20.518851685Z" level=info msg="CreateContainer within sandbox \"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 10:26:20.526445 containerd[1571]: time="2025-09-13T10:26:20.526395040Z" level=info msg="Container 151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:26:20.538567 containerd[1571]: time="2025-09-13T10:26:20.538513699Z" level=info msg="CreateContainer within sandbox \"a7a1a547a92c264f8188c6998f8091b4803cf875e9198a06a5717d1c8250e8eb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68\""
Sep 13 10:26:20.539373 containerd[1571]: time="2025-09-13T10:26:20.539343109Z" level=info msg="StartContainer for \"151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68\""
Sep 13 10:26:20.541129 containerd[1571]: time="2025-09-13T10:26:20.541061833Z" level=info msg="connecting to shim 151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68" address="unix:///run/containerd/s/492ee19d2900dfc91ae396cd6c9c60da74311f9e372641d163f5343b47d27086" protocol=ttrpc version=3
Sep 13 10:26:20.572875 systemd[1]: Started cri-containerd-151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68.scope - libcontainer container 151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68.
Sep 13 10:26:20.624424 containerd[1571]: time="2025-09-13T10:26:20.624377140Z" level=info msg="StartContainer for \"151edea7b5e2cb505bbad2f622152739be3b75fa9cca48f70a4699f48d0afd68\" returns successfully"
Sep 13 10:26:21.447007 kubelet[2735]: I0913 10:26:21.446957 2735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 10:26:21.447007 kubelet[2735]: I0913 10:26:21.447004 2735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 10:26:23.893171 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:37946.service - OpenSSH per-connection server daemon (10.0.0.1:37946).
Sep 13 10:26:23.986258 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 37946 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:23.988060 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:23.995397 systemd-logind[1517]: New session 16 of user core.
Sep 13 10:26:24.002699 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 10:26:24.213059 sshd[5607]: Connection closed by 10.0.0.1 port 37946
Sep 13 10:26:24.213988 sshd-session[5603]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:24.226789 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit.
Sep 13 10:26:24.227688 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:37946.service: Deactivated successfully.
Sep 13 10:26:24.230624 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 10:26:24.232482 systemd-logind[1517]: Removed session 16.
Sep 13 10:26:29.236165 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954).
Sep 13 10:26:29.302449 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:29.304450 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:29.309909 systemd-logind[1517]: New session 17 of user core.
Sep 13 10:26:29.324784 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 10:26:29.455812 sshd[5635]: Connection closed by 10.0.0.1 port 37954
Sep 13 10:26:29.456186 sshd-session[5631]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:29.461343 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:37954.service: Deactivated successfully.
Sep 13 10:26:29.463841 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 10:26:29.464714 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Sep 13 10:26:29.466363 systemd-logind[1517]: Removed session 17.
Sep 13 10:26:30.395823 containerd[1571]: time="2025-09-13T10:26:30.395778711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\" id:\"19c0e77e06cee3b11785e0cc792172b3b25e89d6ecd14fcbfa02c205c14796d2\" pid:5661 exited_at:{seconds:1757759190 nanos:395468497}"
Sep 13 10:26:30.414332 kubelet[2735]: I0913 10:26:30.414047 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h2fq7" podStartSLOduration=36.533006866 podStartE2EDuration="53.414025448s" podCreationTimestamp="2025-09-13 10:25:37 +0000 UTC" firstStartedPulling="2025-09-13 10:26:03.636375753 +0000 UTC m=+47.358240834" lastFinishedPulling="2025-09-13 10:26:20.517394335 +0000 UTC m=+64.239259416" observedRunningTime="2025-09-13 10:26:21.426459592 +0000 UTC m=+65.148324673" watchObservedRunningTime="2025-09-13 10:26:30.414025448 +0000 UTC m=+74.135890529"
Sep 13 10:26:34.473945 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174).
Sep 13 10:26:34.545238 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:34.547170 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:34.554160 systemd-logind[1517]: New session 18 of user core.
Sep 13 10:26:34.567681 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 10:26:34.776167 sshd[5678]: Connection closed by 10.0.0.1 port 55174
Sep 13 10:26:34.777222 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:34.780953 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:55174.service: Deactivated successfully.
Sep 13 10:26:34.783367 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 10:26:34.785497 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Sep 13 10:26:34.787080 systemd-logind[1517]: Removed session 18.
Sep 13 10:26:37.454676 containerd[1571]: time="2025-09-13T10:26:37.454624201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" id:\"f2c946c506ae442c7e9e5ea06c43bd0f5c92b1c6bb4193eb319beb76a92132ab\" pid:5702 exited_at:{seconds:1757759197 nanos:454238554}"
Sep 13 10:26:39.377985 kubelet[2735]: E0913 10:26:39.377923 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:39.790104 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178).
Sep 13 10:26:39.871331 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:39.873448 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:39.880247 systemd-logind[1517]: New session 19 of user core.
Sep 13 10:26:39.888075 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 10:26:40.054739 sshd[5720]: Connection closed by 10.0.0.1 port 55178
Sep 13 10:26:40.055926 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:40.067830 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:55178.service: Deactivated successfully.
Sep 13 10:26:40.071050 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 10:26:40.072156 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Sep 13 10:26:40.077581 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:44252.service - OpenSSH per-connection server daemon (10.0.0.1:44252).
Sep 13 10:26:40.078992 systemd-logind[1517]: Removed session 19.
Sep 13 10:26:40.150593 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 44252 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:40.153341 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:40.161344 systemd-logind[1517]: New session 20 of user core.
Sep 13 10:26:40.165683 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 10:26:40.377498 kubelet[2735]: E0913 10:26:40.377393 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:41.092144 sshd[5736]: Connection closed by 10.0.0.1 port 44252
Sep 13 10:26:41.092879 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:41.103278 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:44252.service: Deactivated successfully.
Sep 13 10:26:41.105946 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 10:26:41.106828 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
Sep 13 10:26:41.110712 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258).
Sep 13 10:26:41.111633 systemd-logind[1517]: Removed session 20.
Sep 13 10:26:41.182270 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:41.184677 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:41.190907 systemd-logind[1517]: New session 21 of user core.
Sep 13 10:26:41.197849 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 10:26:42.224735 sshd[5751]: Connection closed by 10.0.0.1 port 44258
Sep 13 10:26:42.225990 sshd-session[5748]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:42.238750 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:44258.service: Deactivated successfully.
Sep 13 10:26:42.244585 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 10:26:42.246832 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit.
Sep 13 10:26:42.250142 systemd-logind[1517]: Removed session 21.
Sep 13 10:26:42.252102 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:44268.service - OpenSSH per-connection server daemon (10.0.0.1:44268).
Sep 13 10:26:42.316329 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 44268 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:42.317939 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:42.322597 systemd-logind[1517]: New session 22 of user core.
Sep 13 10:26:42.333751 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 10:26:42.760513 sshd[5775]: Connection closed by 10.0.0.1 port 44268
Sep 13 10:26:42.761777 sshd-session[5772]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:42.773901 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:44268.service: Deactivated successfully.
Sep 13 10:26:42.776959 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 10:26:42.779591 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit.
Sep 13 10:26:42.782238 systemd-logind[1517]: Removed session 22.
Sep 13 10:26:42.784834 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:44272.service - OpenSSH per-connection server daemon (10.0.0.1:44272).
Sep 13 10:26:42.883194 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 44272 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:42.884791 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:42.892662 systemd-logind[1517]: New session 23 of user core.
Sep 13 10:26:42.899700 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 10:26:43.026203 sshd[5790]: Connection closed by 10.0.0.1 port 44272
Sep 13 10:26:43.026915 sshd-session[5787]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:43.032500 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:44272.service: Deactivated successfully.
Sep 13 10:26:43.034905 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 10:26:43.035784 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit.
Sep 13 10:26:43.039154 systemd-logind[1517]: Removed session 23.
Sep 13 10:26:43.437022 containerd[1571]: time="2025-09-13T10:26:43.436966284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\" id:\"72e78983ea0b413c396443541d514d714d21e87459027410b6add4dde75ae658\" pid:5816 exited_at:{seconds:1757759203 nanos:436603654}"
Sep 13 10:26:48.044396 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:44278.service - OpenSSH per-connection server daemon (10.0.0.1:44278).
Sep 13 10:26:48.110958 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 44278 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:48.112784 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:48.118623 systemd-logind[1517]: New session 24 of user core.
Sep 13 10:26:48.124671 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 10:26:48.264624 sshd[5830]: Connection closed by 10.0.0.1 port 44278
Sep 13 10:26:48.264948 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:48.270182 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:44278.service: Deactivated successfully.
Sep 13 10:26:48.272508 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 10:26:48.273288 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit.
Sep 13 10:26:48.274517 systemd-logind[1517]: Removed session 24.
Sep 13 10:26:50.377623 kubelet[2735]: E0913 10:26:50.377570 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:53.285201 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:55244.service - OpenSSH per-connection server daemon (10.0.0.1:55244).
Sep 13 10:26:53.336983 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:53.338660 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:53.343115 systemd-logind[1517]: New session 25 of user core.
Sep 13 10:26:53.354681 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 10:26:53.463689 sshd[5857]: Connection closed by 10.0.0.1 port 55244
Sep 13 10:26:53.464062 sshd-session[5854]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:53.468751 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:55244.service: Deactivated successfully.
Sep 13 10:26:53.471086 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 10:26:53.471968 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit.
Sep 13 10:26:53.473272 systemd-logind[1517]: Removed session 25.
Sep 13 10:26:54.378039 kubelet[2735]: E0913 10:26:54.377926 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:26:58.476986 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254).
Sep 13 10:26:58.736713 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:26:58.738681 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:26:58.743583 systemd-logind[1517]: New session 26 of user core.
Sep 13 10:26:58.750676 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 10:26:58.943558 sshd[5876]: Connection closed by 10.0.0.1 port 55254
Sep 13 10:26:58.944209 sshd-session[5873]: pam_unix(sshd:session): session closed for user core
Sep 13 10:26:58.950313 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:55254.service: Deactivated successfully.
Sep 13 10:26:58.953877 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 10:26:58.955952 systemd-logind[1517]: Session 26 logged out. Waiting for processes to exit.
Sep 13 10:26:58.960842 systemd-logind[1517]: Removed session 26.
Sep 13 10:27:00.628642 containerd[1571]: time="2025-09-13T10:27:00.628565524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"740f2c4cde14b3cf9df190a2e29540a63bc88757347db9def21700e7d5b83de0\" id:\"e07f5397ff138ba1d54c6b3c7d95b14791d12ebe3d8b5fe91c0e38401e4dd2ab\" pid:5900 exited_at:{seconds:1757759220 nanos:588261306}"
Sep 13 10:27:03.965681 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:55660.service - OpenSSH per-connection server daemon (10.0.0.1:55660).
Sep 13 10:27:04.072710 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 55660 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:04.074471 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:04.079186 systemd-logind[1517]: New session 27 of user core.
Sep 13 10:27:04.085659 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 10:27:04.255633 sshd[5918]: Connection closed by 10.0.0.1 port 55660
Sep 13 10:27:04.255783 sshd-session[5915]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:04.261563 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:55660.service: Deactivated successfully.
Sep 13 10:27:04.264102 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 10:27:04.265126 systemd-logind[1517]: Session 27 logged out. Waiting for processes to exit.
Sep 13 10:27:04.266971 systemd-logind[1517]: Removed session 27.
Sep 13 10:27:06.565999 containerd[1571]: time="2025-09-13T10:27:06.565935417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" id:\"e5fdaef6d2c893b974ca13a61cc7cadce8855c4a6eb5230b29e301d0dcdd2459\" pid:5941 exited_at:{seconds:1757759226 nanos:565439028}"
Sep 13 10:27:07.494570 containerd[1571]: time="2025-09-13T10:27:07.494459206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9454f7df1a8b2dd18b2258688895f444ee969b39fd7423443f8564ac5b24b10\" id:\"6784b5a561f4fbd6d1dbff38a2be8bb39f86f8915ef4f52220ffcabe0848c730\" pid:5965 exited_at:{seconds:1757759227 nanos:493749994}"
Sep 13 10:27:09.273217 systemd[1]: Started sshd@27-10.0.0.117:22-10.0.0.1:55670.service - OpenSSH per-connection server daemon (10.0.0.1:55670).
Sep 13 10:27:09.322336 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 55670 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:27:09.324282 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:27:09.329118 systemd-logind[1517]: New session 28 of user core.
Sep 13 10:27:09.343691 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 13 10:27:09.466597 sshd[5981]: Connection closed by 10.0.0.1 port 55670
Sep 13 10:27:09.466969 sshd-session[5978]: pam_unix(sshd:session): session closed for user core
Sep 13 10:27:09.472840 systemd[1]: sshd@27-10.0.0.117:22-10.0.0.1:55670.service: Deactivated successfully.
Sep 13 10:27:09.475269 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 10:27:09.476286 systemd-logind[1517]: Session 28 logged out. Waiting for processes to exit.
Sep 13 10:27:09.478391 systemd-logind[1517]: Removed session 28.
Sep 13 10:27:09.625503 containerd[1571]: time="2025-09-13T10:27:09.625446365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cf7f9845c0a2afdcf19d57dee880b401e729f618a2fe3cc248a955143ff9c79\" id:\"02f8a78a1a4a07303838a47480a01590520318241d94a3b03678059a28b5f594\" pid:6005 exited_at:{seconds:1757759229 nanos:625106102}"
Sep 13 10:27:10.381851 kubelet[2735]: E0913 10:27:10.381800 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"