Sep 13 10:34:32.819121 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:34:32.819144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:34:32.819152 kernel: BIOS-provided physical RAM map:
Sep 13 10:34:32.819159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 10:34:32.819165 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 10:34:32.819172 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 10:34:32.819179 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 10:34:32.819186 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 10:34:32.819195 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 10:34:32.819202 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 10:34:32.819208 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:34:32.819215 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 10:34:32.819221 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:34:32.819227 kernel: NX (Execute Disable) protection: active
Sep 13 10:34:32.819237 kernel: APIC: Static calls initialized
Sep 13 10:34:32.819244 kernel: SMBIOS 2.8 present.
Sep 13 10:34:32.819252 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 10:34:32.819258 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:34:32.819265 kernel: Hypervisor detected: KVM
Sep 13 10:34:32.819272 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:34:32.819279 kernel: kvm-clock: using sched offset of 3285208932 cycles
Sep 13 10:34:32.819286 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:34:32.819293 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:34:32.819301 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:34:32.819310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:34:32.819318 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 10:34:32.819325 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 10:34:32.819332 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:34:32.819339 kernel: Using GB pages for direct mapping
Sep 13 10:34:32.819346 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:34:32.819353 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 10:34:32.819361 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819370 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819377 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819384 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 10:34:32.819392 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819399 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819406 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819413 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:34:32.819421 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 10:34:32.819444 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 10:34:32.819459 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 10:34:32.819473 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 10:34:32.819488 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 10:34:32.819495 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 10:34:32.819502 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 10:34:32.819512 kernel: No NUMA configuration found
Sep 13 10:34:32.819520 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 10:34:32.819527 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 13 10:34:32.819534 kernel: Zone ranges:
Sep 13 10:34:32.819542 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:34:32.819549 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 10:34:32.819556 kernel: Normal empty
Sep 13 10:34:32.819564 kernel: Device empty
Sep 13 10:34:32.819571 kernel: Movable zone start for each node
Sep 13 10:34:32.819578 kernel: Early memory node ranges
Sep 13 10:34:32.819588 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 10:34:32.819596 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 10:34:32.819603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 10:34:32.819610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:34:32.819618 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 10:34:32.819625 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 10:34:32.819632 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:34:32.819640 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:34:32.819647 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:34:32.819657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:34:32.819664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:34:32.819672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:34:32.819679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:34:32.819687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:34:32.819694 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:34:32.819709 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:34:32.819718 kernel: TSC deadline timer available
Sep 13 10:34:32.819733 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:34:32.819743 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:34:32.819750 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:34:32.819757 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:34:32.819765 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:34:32.819772 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:34:32.819779 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:34:32.819786 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:34:32.819794 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:34:32.819801 kernel: kvm-guest: setup PV sched yield
Sep 13 10:34:32.819809 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 10:34:32.819818 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:34:32.819826 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:34:32.819833 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:34:32.819841 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:34:32.819848 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:34:32.819855 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:34:32.819863 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:34:32.819870 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:34:32.819879 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:34:32.819889 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 10:34:32.819896 kernel: random: crng init done
Sep 13 10:34:32.819904 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:34:32.819911 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:34:32.819918 kernel: Fallback order for Node 0: 0
Sep 13 10:34:32.819926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 13 10:34:32.819933 kernel: Policy zone: DMA32
Sep 13 10:34:32.819941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:34:32.819950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:34:32.819957 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:34:32.819965 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:34:32.819972 kernel: Dynamic Preempt: voluntary
Sep 13 10:34:32.819980 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:34:32.819988 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:34:32.819996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:34:32.820003 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:34:32.820011 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:34:32.820018 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:34:32.820060 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:34:32.820067 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:34:32.820075 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:34:32.820083 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:34:32.820090 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:34:32.820098 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:34:32.820105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:34:32.820122 kernel: Console: colour VGA+ 80x25
Sep 13 10:34:32.820129 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:34:32.820137 kernel: ACPI: Core revision 20240827
Sep 13 10:34:32.820145 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:34:32.820155 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:34:32.820163 kernel: x2apic enabled
Sep 13 10:34:32.820170 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:34:32.820178 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:34:32.820186 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:34:32.820196 kernel: kvm-guest: setup PV IPIs
Sep 13 10:34:32.820204 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:34:32.820211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:34:32.820219 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:34:32.820227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:34:32.820235 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:34:32.820242 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:34:32.820250 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:34:32.820258 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:34:32.820268 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:34:32.820276 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:34:32.820283 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:34:32.820291 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:34:32.820299 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:34:32.820307 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:34:32.820314 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:34:32.820323 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:34:32.820333 kernel: active return thunk: srso_return_thunk
Sep 13 10:34:32.820340 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:34:32.820348 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:34:32.820356 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:34:32.820363 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:34:32.820371 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:34:32.820379 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 10:34:32.820387 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:34:32.820394 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:34:32.820404 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:34:32.820412 kernel: landlock: Up and running.
Sep 13 10:34:32.820419 kernel: SELinux: Initializing.
Sep 13 10:34:32.820427 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:34:32.820435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:34:32.820443 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:34:32.820451 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:34:32.820458 kernel: ... version: 0
Sep 13 10:34:32.820466 kernel: ... bit width: 48
Sep 13 10:34:32.820475 kernel: ... generic registers: 6
Sep 13 10:34:32.820483 kernel: ... value mask: 0000ffffffffffff
Sep 13 10:34:32.820491 kernel: ... max period: 00007fffffffffff
Sep 13 10:34:32.820498 kernel: ... fixed-purpose events: 0
Sep 13 10:34:32.820506 kernel: ... event mask: 000000000000003f
Sep 13 10:34:32.820513 kernel: signal: max sigframe size: 1776
Sep 13 10:34:32.820521 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:34:32.820529 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:34:32.820537 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:34:32.820546 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:34:32.820554 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:34:32.820562 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:34:32.820569 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:34:32.820577 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:34:32.820585 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 136904K reserved, 0K cma-reserved)
Sep 13 10:34:32.820593 kernel: devtmpfs: initialized
Sep 13 10:34:32.820601 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:34:32.820609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:34:32.820619 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:34:32.820626 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:34:32.820634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:34:32.820642 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:34:32.820649 kernel: audit: type=2000 audit(1757759670.066:1): state=initialized audit_enabled=0 res=1
Sep 13 10:34:32.820657 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:34:32.820665 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:34:32.820672 kernel: cpuidle: using governor menu
Sep 13 10:34:32.820680 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:34:32.820690 kernel: dca service started, version 1.12.1
Sep 13 10:34:32.820698 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 10:34:32.820705 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 10:34:32.820713 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:34:32.820721 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 10:34:32.820728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:34:32.820736 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:34:32.820744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:34:32.820751 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:34:32.820761 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:34:32.820769 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:34:32.820776 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:34:32.820784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:34:32.820792 kernel: ACPI: Interpreter enabled
Sep 13 10:34:32.820799 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:34:32.820807 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:34:32.820814 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:34:32.820822 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:34:32.820842 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:34:32.820858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:34:32.821120 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:34:32.821250 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:34:32.821365 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:34:32.821375 kernel: PCI host bridge to bus 0000:00
Sep 13 10:34:32.821492 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:34:32.821602 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:34:32.821708 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:34:32.821832 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 10:34:32.821965 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 10:34:32.822099 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 10:34:32.822205 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:34:32.822339 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:34:32.822470 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:34:32.822586 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 13 10:34:32.822699 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 13 10:34:32.822812 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 13 10:34:32.822926 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:34:32.823076 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:34:32.823200 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 10:34:32.823317 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 13 10:34:32.823432 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 10:34:32.823557 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:34:32.823674 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 10:34:32.823789 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 13 10:34:32.823903 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 10:34:32.824060 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:34:32.824196 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 13 10:34:32.824311 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 13 10:34:32.824426 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 10:34:32.824540 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 13 10:34:32.824663 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:34:32.824777 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:34:32.824910 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:34:32.825049 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 13 10:34:32.825167 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 13 10:34:32.825292 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:34:32.825407 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 10:34:32.825417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:34:32.825425 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:34:32.825436 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:34:32.825444 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:34:32.825452 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:34:32.825460 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:34:32.825467 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:34:32.825475 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:34:32.825483 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:34:32.825491 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:34:32.825499 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:34:32.825508 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:34:32.825516 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:34:32.825524 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:34:32.825531 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:34:32.825539 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:34:32.825547 kernel: iommu: Default domain type: Translated
Sep 13 10:34:32.825554 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:34:32.825562 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:34:32.825570 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:34:32.825579 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 10:34:32.825587 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 10:34:32.825702 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:34:32.825816 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:34:32.825929 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:34:32.825939 kernel: vgaarb: loaded
Sep 13 10:34:32.825947 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:34:32.825955 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:34:32.825966 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:34:32.825974 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:34:32.825982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:34:32.825989 kernel: pnp: PnP ACPI init
Sep 13 10:34:32.826140 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 10:34:32.826152 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:34:32.826161 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:34:32.826168 kernel: NET: Registered PF_INET protocol family
Sep 13 10:34:32.826180 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:34:32.826188 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:34:32.826196 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:34:32.826204 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:34:32.826211 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:34:32.826219 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:34:32.826227 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:34:32.826235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:34:32.826243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:34:32.826253 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:34:32.826362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:34:32.826469 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:34:32.826574 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:34:32.826688 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 10:34:32.826798 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 10:34:32.826902 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 10:34:32.826912 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:34:32.826923 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:34:32.826931 kernel: Initialise system trusted keyrings
Sep 13 10:34:32.826939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:34:32.826947 kernel: Key type asymmetric registered
Sep 13 10:34:32.826955 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:34:32.826963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:34:32.826971 kernel: io scheduler mq-deadline registered
Sep 13 10:34:32.826978 kernel: io scheduler kyber registered
Sep 13 10:34:32.826986 kernel: io scheduler bfq registered
Sep 13 10:34:32.826996 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:34:32.827004 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:34:32.827012 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:34:32.827053 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:34:32.827061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:34:32.827069 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:34:32.827077 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:34:32.827085 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:34:32.827093 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:34:32.827225 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 10:34:32.827336 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 10:34:32.827445 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:34:32 UTC (1757759672)
Sep 13 10:34:32.827552 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 10:34:32.827562 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 10:34:32.827570 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 13 10:34:32.827578 kernel: NET: Registered PF_INET6 protocol family
Sep 13 10:34:32.827585 kernel: Segment Routing with IPv6
Sep 13 10:34:32.827597 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 10:34:32.827604 kernel: NET: Registered PF_PACKET protocol family
Sep 13 10:34:32.827612 kernel: Key type dns_resolver registered
Sep 13 10:34:32.827620 kernel: IPI shorthand broadcast: enabled
Sep 13 10:34:32.827628 kernel: sched_clock: Marking stable (2644002516, 107370570)->(2765348433, -13975347)
Sep 13 10:34:32.827636 kernel: registered taskstats version 1
Sep 13 10:34:32.827644 kernel: Loading compiled-in X.509 certificates
Sep 13 10:34:32.827652 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350'
Sep 13 10:34:32.827660 kernel: Demotion targets for Node 0: null
Sep 13 10:34:32.827669 kernel: Key type .fscrypt registered
Sep 13 10:34:32.827677 kernel: Key type fscrypt-provisioning registered
Sep 13 10:34:32.827685 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 10:34:32.827692 kernel: ima: Allocated hash algorithm: sha1
Sep 13 10:34:32.827700 kernel: ima: No architecture policies found
Sep 13 10:34:32.827708 kernel: clk: Disabling unused clocks
Sep 13 10:34:32.827716 kernel: Warning: unable to open an initial console.
Sep 13 10:34:32.827724 kernel: Freeing unused kernel image (initmem) memory: 54088K
Sep 13 10:34:32.827732 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 10:34:32.827741 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 13 10:34:32.827749 kernel: Run /init as init process
Sep 13 10:34:32.827757 kernel: with arguments:
Sep 13 10:34:32.827765 kernel: /init
Sep 13 10:34:32.827772 kernel: with environment:
Sep 13 10:34:32.827780 kernel: HOME=/
Sep 13 10:34:32.827788 kernel: TERM=linux
Sep 13 10:34:32.827795 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 10:34:32.827805 systemd[1]: Successfully made /usr/ read-only.
Sep 13 10:34:32.827825 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:34:32.827836 systemd[1]: Detected virtualization kvm.
Sep 13 10:34:32.827844 systemd[1]: Detected architecture x86-64.
Sep 13 10:34:32.827853 systemd[1]: Running in initrd.
Sep 13 10:34:32.827861 systemd[1]: No hostname configured, using default hostname.
Sep 13 10:34:32.827871 systemd[1]: Hostname set to .
Sep 13 10:34:32.827880 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:34:32.827888 systemd[1]: Queued start job for default target initrd.target.
Sep 13 10:34:32.827897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:34:32.827905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:34:32.827915 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 10:34:32.827924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:34:32.827932 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 10:34:32.827944 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 10:34:32.827954 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 10:34:32.827962 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 10:34:32.827971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:34:32.827980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:34:32.827988 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:34:32.827997 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:34:32.828007 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:34:32.828016 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:34:32.828049 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:34:32.828057 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:34:32.828066 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 10:34:32.828075 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 10:34:32.828083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 10:34:32.828094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 10:34:32.828105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 10:34:32.828113 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 10:34:32.828121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 10:34:32.828130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 10:34:32.828141 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 10:34:32.828150 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 10:34:32.828160 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 10:34:32.828169 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 10:34:32.828177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 10:34:32.828186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:34:32.828195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 10:34:32.828205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:34:32.828214 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 10:34:32.828247 systemd-journald[221]: Collecting audit messages is disabled. Sep 13 10:34:32.828269 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 10:34:32.828278 systemd-journald[221]: Journal started Sep 13 10:34:32.828296 systemd-journald[221]: Runtime Journal (/run/log/journal/1a170b6f25514a72ba8e0ce9a4fab5f0) is 6M, max 48.6M, 42.5M free. 
Sep 13 10:34:32.819760 systemd-modules-load[223]: Inserted module 'overlay' Sep 13 10:34:32.858441 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 10:34:32.858464 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 10:34:32.858486 kernel: Bridge firewalling registered Sep 13 10:34:32.845842 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 13 10:34:32.857584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 10:34:32.858835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:34:32.861548 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:34:32.864662 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 10:34:32.866394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:34:32.870121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 10:34:32.875964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 10:34:32.886978 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 10:34:32.888450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:34:32.888945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 10:34:32.890948 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 10:34:32.893517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:34:32.898439 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 13 10:34:32.900381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 10:34:32.932853 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0 Sep 13 10:34:32.950679 systemd-resolved[263]: Positive Trust Anchors: Sep 13 10:34:32.950694 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 10:34:32.950722 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 10:34:32.953195 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 13 10:34:32.954160 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 10:34:32.959365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 10:34:33.039047 kernel: SCSI subsystem initialized Sep 13 10:34:33.048056 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 10:34:33.058055 kernel: iscsi: registered transport (tcp) Sep 13 10:34:33.083056 kernel: iscsi: registered transport (qla4xxx) Sep 13 10:34:33.083123 kernel: QLogic iSCSI HBA Driver Sep 13 10:34:33.105466 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 10:34:33.131234 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:34:33.133486 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 10:34:33.186253 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 10:34:33.188568 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 10:34:33.252050 kernel: raid6: avx2x4 gen() 30339 MB/s Sep 13 10:34:33.269043 kernel: raid6: avx2x2 gen() 30963 MB/s Sep 13 10:34:33.286081 kernel: raid6: avx2x1 gen() 25809 MB/s Sep 13 10:34:33.286102 kernel: raid6: using algorithm avx2x2 gen() 30963 MB/s Sep 13 10:34:33.304078 kernel: raid6: .... xor() 19981 MB/s, rmw enabled Sep 13 10:34:33.304102 kernel: raid6: using avx2x2 recovery algorithm Sep 13 10:34:33.324051 kernel: xor: automatically using best checksumming function avx Sep 13 10:34:33.483054 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 10:34:33.491457 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 10:34:33.494132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 10:34:33.522411 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 13 10:34:33.527884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 10:34:33.528998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 10:34:33.553622 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 13 10:34:33.583318 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 10:34:33.586738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 10:34:33.659479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:34:33.661330 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 10:34:33.693097 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 10:34:33.695246 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 10:34:33.697818 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 10:34:33.697836 kernel: GPT:9289727 != 19775487 Sep 13 10:34:33.697847 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 10:34:33.699803 kernel: GPT:9289727 != 19775487 Sep 13 10:34:33.699860 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 10:34:33.699873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:34:33.709678 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 10:34:33.714041 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 10:34:33.726048 kernel: libata version 3.00 loaded. Sep 13 10:34:33.731054 kernel: AES CTR mode by8 optimization enabled Sep 13 10:34:33.746351 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:34:33.747946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:34:33.752045 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 10:34:33.752486 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 10:34:33.750444 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:34:33.754210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 10:34:33.761122 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 10:34:33.761321 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 10:34:33.761465 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 10:34:33.770126 kernel: scsi host0: ahci Sep 13 10:34:33.771047 kernel: scsi host1: ahci Sep 13 10:34:33.774853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 10:34:33.776043 kernel: scsi host2: ahci Sep 13 10:34:33.778224 kernel: scsi host3: ahci Sep 13 10:34:33.778416 kernel: scsi host4: ahci Sep 13 10:34:33.779075 kernel: scsi host5: ahci Sep 13 10:34:33.781540 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 13 10:34:33.781553 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 13 10:34:33.781564 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 13 10:34:33.781575 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 13 10:34:33.781589 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 13 10:34:33.781599 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 13 10:34:33.803840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 10:34:33.823448 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 10:34:33.823708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 10:34:33.826958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:34:33.838081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 13 10:34:33.839184 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 10:34:33.863416 disk-uuid[632]: Primary Header is updated. Sep 13 10:34:33.863416 disk-uuid[632]: Secondary Entries is updated. Sep 13 10:34:33.863416 disk-uuid[632]: Secondary Header is updated. Sep 13 10:34:33.866540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:34:33.871051 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:34:34.084057 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 10:34:34.092737 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 10:34:34.092780 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 10:34:34.092792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 10:34:34.092803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 10:34:34.094054 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 10:34:34.095055 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:34:34.095069 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 10:34:34.095333 kernel: ata3.00: applying bridge limits Sep 13 10:34:34.096476 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 10:34:34.096487 kernel: ata3.00: configured for UDMA/100 Sep 13 10:34:34.099047 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 10:34:34.151504 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 10:34:34.151697 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 10:34:34.164055 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 10:34:34.547501 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 10:34:34.550091 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 10:34:34.552423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 10:34:34.554568 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 10:34:34.557339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 10:34:34.593202 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 10:34:34.871721 disk-uuid[633]: The operation has completed successfully. Sep 13 10:34:34.872927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 10:34:34.900325 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 10:34:34.900444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 10:34:34.933881 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 10:34:34.955012 sh[662]: Success Sep 13 10:34:34.972365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 10:34:34.972393 kernel: device-mapper: uevent: version 1.0.3 Sep 13 10:34:34.973383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 10:34:34.982056 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 13 10:34:35.008339 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 10:34:35.010447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 10:34:35.032211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 10:34:35.037846 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (674) Sep 13 10:34:35.037870 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663 Sep 13 10:34:35.037881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:34:35.043051 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 10:34:35.043097 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 10:34:35.044035 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 10:34:35.045366 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 10:34:35.046800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 10:34:35.047543 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 10:34:35.049221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 10:34:35.074047 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Sep 13 10:34:35.076044 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:34:35.076074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:34:35.079260 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:34:35.079282 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:34:35.084072 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:34:35.084454 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 10:34:35.086408 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 13 10:34:35.174597 ignition[746]: Ignition 2.22.0 Sep 13 10:34:35.174611 ignition[746]: Stage: fetch-offline Sep 13 10:34:35.174641 ignition[746]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:35.174650 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:35.174733 ignition[746]: parsed url from cmdline: "" Sep 13 10:34:35.174737 ignition[746]: no config URL provided Sep 13 10:34:35.174742 ignition[746]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 10:34:35.179257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 10:34:35.174752 ignition[746]: no config at "/usr/lib/ignition/user.ign" Sep 13 10:34:35.181516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 10:34:35.174774 ignition[746]: op(1): [started] loading QEMU firmware config module Sep 13 10:34:35.174779 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 10:34:35.190067 ignition[746]: op(1): [finished] loading QEMU firmware config module Sep 13 10:34:35.190091 ignition[746]: QEMU firmware config was not found. Ignoring... Sep 13 10:34:35.226427 systemd-networkd[851]: lo: Link UP Sep 13 10:34:35.226437 systemd-networkd[851]: lo: Gained carrier Sep 13 10:34:35.227900 systemd-networkd[851]: Enumeration completed Sep 13 10:34:35.228172 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 10:34:35.228273 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:34:35.228277 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 10:34:35.234299 systemd-networkd[851]: eth0: Link UP Sep 13 10:34:35.234437 systemd-networkd[851]: eth0: Gained carrier Sep 13 10:34:35.234446 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:34:35.238854 systemd[1]: Reached target network.target - Network. Sep 13 10:34:35.241078 ignition[746]: parsing config with SHA512: 124ddfb9fb6bf1d83c8afa972364a47a2db35956dcf491f8b7bf9d6093f59aefadf91e64a15dd3214390a659db55d78e1b55ad6e965760ca06c3989094ae433d Sep 13 10:34:35.244382 unknown[746]: fetched base config from "system" Sep 13 10:34:35.244393 unknown[746]: fetched user config from "qemu" Sep 13 10:34:35.244683 ignition[746]: fetch-offline: fetch-offline passed Sep 13 10:34:35.244735 ignition[746]: Ignition finished successfully Sep 13 10:34:35.247505 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 10:34:35.249778 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 10:34:35.252136 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 10:34:35.252908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 10:34:35.287110 ignition[861]: Ignition 2.22.0 Sep 13 10:34:35.287123 ignition[861]: Stage: kargs Sep 13 10:34:35.287237 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:35.287247 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:35.287893 ignition[861]: kargs: kargs passed Sep 13 10:34:35.287927 ignition[861]: Ignition finished successfully Sep 13 10:34:35.293857 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 10:34:35.296646 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 13 10:34:35.332790 ignition[869]: Ignition 2.22.0 Sep 13 10:34:35.332804 ignition[869]: Stage: disks Sep 13 10:34:35.332939 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:35.332949 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:35.333693 ignition[869]: disks: disks passed Sep 13 10:34:35.333741 ignition[869]: Ignition finished successfully Sep 13 10:34:35.338611 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 10:34:35.339238 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 10:34:35.340659 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 10:34:35.342807 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 10:34:35.343294 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 10:34:35.343602 systemd[1]: Reached target basic.target - Basic System. Sep 13 10:34:35.344808 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 10:34:35.372293 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 13 10:34:35.379363 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 10:34:35.380415 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 10:34:35.482048 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none. Sep 13 10:34:35.482432 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 10:34:35.484474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 10:34:35.487598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 10:34:35.489955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 10:34:35.491805 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 13 10:34:35.491850 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 10:34:35.493504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 10:34:35.507061 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 10:34:35.508693 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 10:34:35.514042 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Sep 13 10:34:35.514067 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:34:35.515626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:34:35.518501 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:34:35.518541 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:34:35.520465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 10:34:35.543747 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 10:34:35.548717 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Sep 13 10:34:35.553500 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 10:34:35.557905 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 10:34:35.641017 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 10:34:35.642202 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 10:34:35.645682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 10:34:35.663047 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:34:35.673779 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 13 10:34:35.690373 ignition[1003]: INFO : Ignition 2.22.0 Sep 13 10:34:35.690373 ignition[1003]: INFO : Stage: mount Sep 13 10:34:35.691968 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:35.691968 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:35.694595 ignition[1003]: INFO : mount: mount passed Sep 13 10:34:35.695346 ignition[1003]: INFO : Ignition finished successfully Sep 13 10:34:35.698486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 10:34:35.701232 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 10:34:36.036792 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 10:34:36.038201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 10:34:36.060611 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 13 10:34:36.060648 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397 Sep 13 10:34:36.060660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 10:34:36.064160 kernel: BTRFS info (device vda6): turning on async discard Sep 13 10:34:36.064196 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 10:34:36.065655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 10:34:36.096194 ignition[1032]: INFO : Ignition 2.22.0 Sep 13 10:34:36.096194 ignition[1032]: INFO : Stage: files Sep 13 10:34:36.097768 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:36.097768 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:36.097768 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Sep 13 10:34:36.100953 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 10:34:36.100953 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 10:34:36.104360 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 10:34:36.105753 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 10:34:36.107504 unknown[1032]: wrote ssh authorized keys file for user: core Sep 13 10:34:36.108593 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 10:34:36.110892 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 10:34:36.112779 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 10:34:36.155233 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 10:34:36.317869 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 10:34:36.319803 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 10:34:36.333441 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 10:34:36.645156 systemd-networkd[851]: eth0: Gained IPv6LL Sep 13 10:34:36.757740 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 10:34:37.105790 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 10:34:37.105790 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 10:34:37.109800 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 10:34:37.112206 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 10:34:37.112206 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 10:34:37.112206 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 10:34:37.116923 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 10:34:37.116923 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 10:34:37.116923 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 10:34:37.116923 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 10:34:37.131705 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 10:34:37.136452 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:34:37.137964 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 10:34:37.137964 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 10:34:37.137964 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 10:34:37.137964 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 10:34:37.137964 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 10:34:37.137964 ignition[1032]: INFO : files: files passed Sep 13 10:34:37.137964 ignition[1032]: INFO : Ignition finished successfully Sep 13 10:34:37.139956 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 10:34:37.144855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 10:34:37.146221 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 10:34:37.175002 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 10:34:37.175137 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 10:34:37.177947 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 10:34:37.181004 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 10:34:37.182822 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 10:34:37.184838 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 10:34:37.188357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:34:37.190849 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 10:34:37.192009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 10:34:37.239172 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 10:34:37.240230 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 10:34:37.242994 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 10:34:37.244878 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 10:34:37.245381 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 10:34:37.248220 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 10:34:37.280298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 10:34:37.283005 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 10:34:37.312648 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 10:34:37.312989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 10:34:37.315121 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 10:34:37.317384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 10:34:37.317494 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 10:34:37.320561 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 10:34:37.321095 systemd[1]: Stopped target basic.target - Basic System. Sep 13 10:34:37.321551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 10:34:37.321866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 10:34:37.322356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 13 10:34:37.322669 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 13 10:34:37.322996 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 10:34:37.332698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 10:34:37.333036 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 10:34:37.333672 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 10:34:37.333977 systemd[1]: Stopped target swap.target - Swaps. Sep 13 10:34:37.334427 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 10:34:37.334530 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 10:34:37.341623 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 10:34:37.341965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 10:34:37.342406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 10:34:37.347421 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 10:34:37.347984 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 10:34:37.348109 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 10:34:37.348770 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 10:34:37.348874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 10:34:37.353441 systemd[1]: Stopped target paths.target - Path Units. Sep 13 10:34:37.355493 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 10:34:37.360114 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 10:34:37.360470 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 10:34:37.362890 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 13 10:34:37.365941 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 10:34:37.366058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 10:34:37.366603 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 10:34:37.366677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 10:34:37.368881 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 10:34:37.369000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 10:34:37.370541 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 10:34:37.370641 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 10:34:37.375217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 10:34:37.376231 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 10:34:37.378518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 10:34:37.378633 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:34:37.379296 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 10:34:37.379395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 10:34:37.387635 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 10:34:37.391218 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 10:34:37.407050 ignition[1087]: INFO : Ignition 2.22.0 Sep 13 10:34:37.407050 ignition[1087]: INFO : Stage: umount Sep 13 10:34:37.407050 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 10:34:37.407050 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 10:34:37.410927 ignition[1087]: INFO : umount: umount passed Sep 13 10:34:37.410927 ignition[1087]: INFO : Ignition finished successfully Sep 13 10:34:37.412543 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 13 10:34:37.413215 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 10:34:37.413333 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 10:34:37.414223 systemd[1]: Stopped target network.target - Network. Sep 13 10:34:37.415790 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 10:34:37.415839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 10:34:37.416298 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 10:34:37.416340 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 10:34:37.416610 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 10:34:37.416655 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 10:34:37.416939 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 10:34:37.416978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 10:34:37.417496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 10:34:37.417818 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 10:34:37.429453 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 10:34:37.429567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 10:34:37.434486 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 13 10:34:37.435118 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 10:34:37.435199 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:34:37.439156 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 13 10:34:37.446984 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 10:34:37.447179 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Sep 13 10:34:37.450629 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 13 10:34:37.450778 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 13 10:34:37.452868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 10:34:37.452916 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 10:34:37.454008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 10:34:37.455958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 10:34:37.456010 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 10:34:37.456474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 10:34:37.456515 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:34:37.461111 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 10:34:37.461157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 10:34:37.461645 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 10:34:37.462726 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 10:34:37.481951 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 10:34:37.482094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 10:34:37.490819 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 10:34:37.491012 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 10:34:37.491498 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 10:34:37.491539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 10:34:37.494397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 13 10:34:37.494431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 10:34:37.494690 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 10:34:37.494732 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 10:34:37.495479 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 10:34:37.495525 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 10:34:37.496277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 10:34:37.496320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 10:34:37.497611 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 10:34:37.506860 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 13 10:34:37.506925 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:34:37.512602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 10:34:37.512651 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 10:34:37.516195 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 10:34:37.516239 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:34:37.519726 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 10:34:37.519788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:34:37.520443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:34:37.520486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:34:37.528566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 13 10:34:37.528689 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 10:34:37.529472 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 10:34:37.529575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 10:34:37.532827 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 10:34:37.533686 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 10:34:37.533772 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 10:34:37.534842 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 10:34:37.554856 systemd[1]: Switching root. Sep 13 10:34:37.597462 systemd-journald[221]: Journal stopped Sep 13 10:34:38.787361 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). Sep 13 10:34:38.787425 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 10:34:38.787444 kernel: SELinux: policy capability open_perms=1 Sep 13 10:34:38.787455 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 10:34:38.787466 kernel: SELinux: policy capability always_check_network=0 Sep 13 10:34:38.787481 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 10:34:38.787492 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 10:34:38.787506 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 10:34:38.787517 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 10:34:38.787534 kernel: SELinux: policy capability userspace_initial_context=0 Sep 13 10:34:38.787545 kernel: audit: type=1403 audit(1757759678.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 10:34:38.787559 systemd[1]: Successfully loaded SELinux policy in 55.447ms. Sep 13 10:34:38.787582 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.335ms. 
Sep 13 10:34:38.787595 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 10:34:38.787607 systemd[1]: Detected virtualization kvm. Sep 13 10:34:38.787620 systemd[1]: Detected architecture x86-64. Sep 13 10:34:38.787632 systemd[1]: Detected first boot. Sep 13 10:34:38.787643 systemd[1]: Initializing machine ID from VM UUID. Sep 13 10:34:38.787655 zram_generator::config[1132]: No configuration found. Sep 13 10:34:38.787667 kernel: Guest personality initialized and is inactive Sep 13 10:34:38.787678 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 13 10:34:38.787694 kernel: Initialized host personality Sep 13 10:34:38.787705 kernel: NET: Registered PF_VSOCK protocol family Sep 13 10:34:38.787717 systemd[1]: Populated /etc with preset unit settings. Sep 13 10:34:38.787731 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 13 10:34:38.787743 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 10:34:38.787754 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 10:34:38.787766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 10:34:38.787778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 10:34:38.787790 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 10:34:38.787801 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 10:34:38.787812 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 10:34:38.787827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 13 10:34:38.787839 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 10:34:38.787851 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 10:34:38.787862 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 10:34:38.787883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 10:34:38.787896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 10:34:38.787907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 10:34:38.787919 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 10:34:38.787931 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 10:34:38.787946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 10:34:38.787957 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 10:34:38.787969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 10:34:38.787988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 10:34:38.788008 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 10:34:38.790374 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 10:34:38.790392 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 10:34:38.790404 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 10:34:38.790420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 10:34:38.790432 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 13 10:34:38.790443 systemd[1]: Reached target slices.target - Slice Units. Sep 13 10:34:38.790455 systemd[1]: Reached target swap.target - Swaps. Sep 13 10:34:38.790467 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 10:34:38.790479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 10:34:38.790490 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 13 10:34:38.790501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 10:34:38.790513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 10:34:38.790527 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 10:34:38.790539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 10:34:38.790550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 10:34:38.790568 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 10:34:38.790580 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 10:34:38.790591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 10:34:38.790603 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 10:34:38.790615 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 10:34:38.790626 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 10:34:38.790641 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 10:34:38.790653 systemd[1]: Reached target machines.target - Containers. Sep 13 10:34:38.790665 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 13 10:34:38.790677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 10:34:38.790689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 10:34:38.790701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 10:34:38.790713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 10:34:38.790724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 10:34:38.790738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 10:34:38.790749 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 10:34:38.790761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 10:34:38.790773 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 10:34:38.790785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 10:34:38.790796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 10:34:38.790808 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 10:34:38.790820 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 10:34:38.790832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 10:34:38.790846 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 10:34:38.790858 kernel: loop: module loaded Sep 13 10:34:38.790878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 13 10:34:38.790891 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 10:34:38.790903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 10:34:38.790915 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 13 10:34:38.790926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 10:34:38.790941 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 10:34:38.790952 systemd[1]: Stopped verity-setup.service. Sep 13 10:34:38.790965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 10:34:38.790977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 10:34:38.790988 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 10:34:38.790999 kernel: ACPI: bus type drm_connector registered Sep 13 10:34:38.791013 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 10:34:38.791059 systemd-journald[1203]: Collecting audit messages is disabled. Sep 13 10:34:38.791085 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 10:34:38.791097 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 10:34:38.791114 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 10:34:38.791126 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 10:34:38.791137 kernel: fuse: init (API version 7.41) Sep 13 10:34:38.791149 systemd-journald[1203]: Journal started Sep 13 10:34:38.791172 systemd-journald[1203]: Runtime Journal (/run/log/journal/1a170b6f25514a72ba8e0ce9a4fab5f0) is 6M, max 48.6M, 42.5M free. Sep 13 10:34:38.565471 systemd[1]: Queued start job for default target multi-user.target. 
Sep 13 10:34:38.577860 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 10:34:38.578326 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 10:34:38.794686 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 10:34:38.795649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:34:38.797130 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 10:34:38.797349 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 10:34:38.798782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 10:34:38.798995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 10:34:38.800434 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 10:34:38.800643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 10:34:38.801950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 10:34:38.802176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 10:34:38.803616 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 10:34:38.803815 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 10:34:38.805239 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 10:34:38.805463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 10:34:38.806849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 10:34:38.808275 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:34:38.809889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 10:34:38.811388 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Sep 13 10:34:38.825202 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 10:34:38.827547 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 10:34:38.829600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 10:34:38.830699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 10:34:38.830727 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 10:34:38.832618 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 13 10:34:38.844123 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 10:34:38.845456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 10:34:38.847113 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 10:34:38.851160 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 10:34:38.852368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 10:34:38.854339 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 10:34:38.855425 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 10:34:38.856938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:34:38.858907 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 10:34:38.861179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Sep 13 10:34:38.864168 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 10:34:38.865479 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 10:34:38.868660 systemd-journald[1203]: Time spent on flushing to /var/log/journal/1a170b6f25514a72ba8e0ce9a4fab5f0 is 29.780ms for 984 entries. Sep 13 10:34:38.868660 systemd-journald[1203]: System Journal (/var/log/journal/1a170b6f25514a72ba8e0ce9a4fab5f0) is 8M, max 195.6M, 187.6M free. Sep 13 10:34:38.913506 systemd-journald[1203]: Received client request to flush runtime journal. Sep 13 10:34:38.913542 kernel: loop0: detected capacity change from 0 to 128016 Sep 13 10:34:38.913556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 10:34:38.876001 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:34:38.891386 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 13 10:34:38.891399 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 13 10:34:38.893257 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 10:34:38.895457 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 10:34:38.899228 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 13 10:34:38.903272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:34:38.910555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:34:38.913740 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 10:34:38.919926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 10:34:38.928041 kernel: loop1: detected capacity change from 0 to 110984 Sep 13 10:34:38.938400 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 13 10:34:38.957192 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 10:34:38.958043 kernel: loop2: detected capacity change from 0 to 229808 Sep 13 10:34:38.959930 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 10:34:38.981053 kernel: loop3: detected capacity change from 0 to 128016 Sep 13 10:34:38.991547 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 13 10:34:38.991563 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 13 10:34:38.997388 kernel: loop4: detected capacity change from 0 to 110984 Sep 13 10:34:38.997469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 10:34:39.006056 kernel: loop5: detected capacity change from 0 to 229808 Sep 13 10:34:39.013536 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 10:34:39.014077 (sd-merge)[1275]: Merged extensions into '/usr'. Sep 13 10:34:39.020109 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 10:34:39.020125 systemd[1]: Reloading... Sep 13 10:34:39.076519 zram_generator::config[1304]: No configuration found. Sep 13 10:34:39.178615 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 10:34:39.271092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 10:34:39.271331 systemd[1]: Reloading finished in 250 ms. Sep 13 10:34:39.311483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 10:34:39.312983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 10:34:39.327263 systemd[1]: Starting ensure-sysext.service... Sep 13 10:34:39.328994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 13 10:34:39.339759 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Sep 13 10:34:39.339848 systemd[1]: Reloading... Sep 13 10:34:39.345955 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 13 10:34:39.345996 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 13 10:34:39.346607 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 10:34:39.346873 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 10:34:39.347942 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 10:34:39.348305 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 13 10:34:39.348424 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Sep 13 10:34:39.352601 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 10:34:39.352667 systemd-tmpfiles[1341]: Skipping /boot Sep 13 10:34:39.362191 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 10:34:39.362250 systemd-tmpfiles[1341]: Skipping /boot Sep 13 10:34:39.388300 zram_generator::config[1371]: No configuration found. Sep 13 10:34:39.557435 systemd[1]: Reloading finished in 217 ms. Sep 13 10:34:39.577412 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 10:34:39.603070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:34:39.611629 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:34:39.614294 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 13 10:34:39.617298 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 10:34:39.633228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:34:39.636136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:34:39.638926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 10:34:39.645048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:34:39.645228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:34:39.651280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:34:39.653607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:34:39.657879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:34:39.659195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:34:39.659298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:34:39.662615 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 10:34:39.663794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:34:39.665319 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 10:34:39.667413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:34:39.667832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:34:39.669671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:34:39.670100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:34:39.672743 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:34:39.673653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:34:39.685333 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 10:34:39.691642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:34:39.691932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:34:39.694086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:34:39.696834 systemd-udevd[1412]: Using default interface naming scheme 'v255'.
Sep 13 10:34:39.698128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:34:39.700923 augenrules[1443]: No rules
Sep 13 10:34:39.707302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:34:39.709336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:34:39.710468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:34:39.710506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:34:39.711988 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 10:34:39.713080 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:34:39.713572 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 10:34:39.715453 systemd[1]: Finished ensure-sysext.service.
Sep 13 10:34:39.716582 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 10:34:39.717012 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 10:34:39.719277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:34:39.719475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:34:39.720912 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:34:39.721382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:34:39.729385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:34:39.729614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:34:39.731434 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 10:34:39.732897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:34:39.734471 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:34:39.734684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:34:39.736312 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 10:34:39.747632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:34:39.749249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:34:39.749305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:34:39.753225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 10:34:39.754727 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 10:34:39.809836 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 10:34:39.840662 systemd-resolved[1411]: Positive Trust Anchors:
Sep 13 10:34:39.840676 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:34:39.840706 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:34:39.842009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:34:39.845934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 10:34:39.847358 systemd-resolved[1411]: Defaulting to hostname 'linux'.
Sep 13 10:34:39.848876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:34:39.849352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:34:39.862048 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 10:34:39.869822 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 10:34:39.878047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 13 10:34:39.883057 kernel: ACPI: button: Power Button [PWRF]
Sep 13 10:34:39.896718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 10:34:39.897894 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 10:34:39.906854 systemd-networkd[1482]: lo: Link UP
Sep 13 10:34:39.906865 systemd-networkd[1482]: lo: Gained carrier
Sep 13 10:34:39.908497 systemd-networkd[1482]: Enumeration completed
Sep 13 10:34:39.908575 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:34:39.909867 systemd[1]: Reached target network.target - Network.
Sep 13 10:34:39.910388 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:34:39.910400 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:34:39.910933 systemd-networkd[1482]: eth0: Link UP
Sep 13 10:34:39.911124 systemd-networkd[1482]: eth0: Gained carrier
Sep 13 10:34:39.911145 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:34:39.912698 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 10:34:39.916539 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 10:34:39.924077 systemd-networkd[1482]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:34:39.948580 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 10:34:39.953986 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 10:34:39.955528 systemd-timesyncd[1484]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 10:34:39.955568 systemd-timesyncd[1484]: Initial clock synchronization to Sat 2025-09-13 10:34:40.173077 UTC.
Sep 13 10:34:39.955569 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:34:39.956872 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 10:34:39.958100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 10:34:39.959308 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 10:34:39.960469 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 10:34:39.962155 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 10:34:39.962183 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:34:39.963102 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 10:34:39.964224 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 10:34:39.965319 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 10:34:39.966515 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:34:39.968399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 10:34:39.970996 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 10:34:39.974187 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 10:34:39.975571 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 10:34:39.977806 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 10:34:39.980916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 10:34:39.982209 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 10:34:39.983923 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 10:34:39.989166 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:34:39.990109 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:34:39.991066 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:34:39.991094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:34:39.994108 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 10:34:39.996194 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 10:34:40.017212 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 10:34:40.022142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 10:34:40.025216 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 10:34:40.026208 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 10:34:40.029245 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 10:34:40.031295 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 10:34:40.037705 jq[1529]: false
Sep 13 10:34:40.040275 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 10:34:40.046122 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 10:34:40.047641 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing passwd entry cache
Sep 13 10:34:40.047645 oslogin_cache_refresh[1531]: Refreshing passwd entry cache
Sep 13 10:34:40.051267 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 10:34:40.057248 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting users, quitting
Sep 13 10:34:40.057290 oslogin_cache_refresh[1531]: Failure getting users, quitting
Sep 13 10:34:40.057360 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:34:40.057389 oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:34:40.057481 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing group entry cache
Sep 13 10:34:40.057521 oslogin_cache_refresh[1531]: Refreshing group entry cache
Sep 13 10:34:40.063916 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 10:34:40.065026 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting groups, quitting
Sep 13 10:34:40.065076 oslogin_cache_refresh[1531]: Failure getting groups, quitting
Sep 13 10:34:40.065125 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:34:40.065153 oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:34:40.065761 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 10:34:40.066466 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 10:34:40.068953 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 10:34:40.072210 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 10:34:40.078975 extend-filesystems[1530]: Found /dev/vda6
Sep 13 10:34:40.082301 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 10:34:40.083947 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 10:34:40.084247 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 10:34:40.084633 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 10:34:40.084877 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 10:34:40.084978 jq[1547]: true
Sep 13 10:34:40.087526 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 10:34:40.087774 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 10:34:40.088131 extend-filesystems[1530]: Found /dev/vda9
Sep 13 10:34:40.094400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 10:34:40.095199 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 10:34:40.098137 update_engine[1544]: I20250913 10:34:40.096582 1544 main.cc:92] Flatcar Update Engine starting
Sep 13 10:34:40.111849 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 10:34:40.113165 extend-filesystems[1530]: Checking size of /dev/vda9
Sep 13 10:34:40.117775 jq[1556]: true
Sep 13 10:34:40.124252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:34:40.138603 tar[1555]: linux-amd64/LICENSE
Sep 13 10:34:40.138962 tar[1555]: linux-amd64/helm
Sep 13 10:34:40.144328 extend-filesystems[1530]: Resized partition /dev/vda9
Sep 13 10:34:40.149710 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025)
Sep 13 10:34:40.158607 kernel: kvm_amd: TSC scaling supported
Sep 13 10:34:40.158695 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 10:34:40.158709 kernel: kvm_amd: Nested Paging enabled
Sep 13 10:34:40.158721 kernel: kvm_amd: LBR virtualization supported
Sep 13 10:34:40.160192 dbus-daemon[1525]: [system] SELinux support is enabled
Sep 13 10:34:40.160368 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 10:34:40.162390 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 10:34:40.162417 kernel: kvm_amd: Virtual GIF supported
Sep 13 10:34:40.168939 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 10:34:40.168979 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 10:34:40.171723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 10:34:40.171758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 10:34:40.179891 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 10:34:40.183467 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 10:34:40.185530 update_engine[1544]: I20250913 10:34:40.185469 1544 update_check_scheduler.cc:74] Next update check in 9m7s
Sep 13 10:34:40.188061 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 10:34:40.190211 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 10:34:40.194107 bash[1591]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 10:34:40.195488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 10:34:40.201172 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 10:34:40.218222 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 10:34:40.228553 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 10:34:40.234905 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 10:34:40.248310 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 10:34:40.248310 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 10:34:40.248310 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 10:34:40.265698 kernel: EDAC MC: Ver: 3.0.0
Sep 13 10:34:40.246866 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 10:34:40.265901 extend-filesystems[1530]: Resized filesystem in /dev/vda9
Sep 13 10:34:40.247141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 10:34:40.271554 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 10:34:40.292075 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 10:34:40.292102 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 10:34:40.293449 systemd-logind[1543]: New seat seat0.
Sep 13 10:34:40.347971 containerd[1557]: time="2025-09-13T10:34:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 10:34:40.348541 containerd[1557]: time="2025-09-13T10:34:40.348505447Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 13 10:34:40.363544 containerd[1557]: time="2025-09-13T10:34:40.363491775Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.126µs"
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363645455Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363670123Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363862664Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363877627Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363903603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363969611Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.363983370Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.364281973Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.364294950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.364305592Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.364314422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 10:34:40.364763 containerd[1557]: time="2025-09-13T10:34:40.364420278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.364649548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.364679516Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.364689180Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.364728637Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.364964391Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 10:34:40.365167 containerd[1557]: time="2025-09-13T10:34:40.365029668Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 10:34:40.371184 containerd[1557]: time="2025-09-13T10:34:40.371145612Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 10:34:40.372129 containerd[1557]: time="2025-09-13T10:34:40.371644103Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 10:34:40.372129 containerd[1557]: time="2025-09-13T10:34:40.371694788Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 10:34:40.372179 containerd[1557]: time="2025-09-13T10:34:40.372144344Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 10:34:40.372179 containerd[1557]: time="2025-09-13T10:34:40.372165522Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 10:34:40.372234 containerd[1557]: time="2025-09-13T10:34:40.372180383Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 10:34:40.372234 containerd[1557]: time="2025-09-13T10:34:40.372196242Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 10:34:40.372234 containerd[1557]: time="2025-09-13T10:34:40.372211711Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 10:34:40.372234 containerd[1557]: time="2025-09-13T10:34:40.372224800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 10:34:40.372304 containerd[1557]: time="2025-09-13T10:34:40.372238457Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 13 10:34:40.372304 containerd[1557]: time="2025-09-13T10:34:40.372252134Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 13 10:34:40.372304 containerd[1557]: time="2025-09-13T10:34:40.372268343Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 13 10:34:40.372440 containerd[1557]: time="2025-09-13T10:34:40.372407008Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 13 10:34:40.372440 containerd[1557]: time="2025-09-13T10:34:40.372436977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 13 10:34:40.372497 containerd[1557]: time="2025-09-13T10:34:40.372455789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 13 10:34:40.372497 containerd[1557]: time="2025-09-13T10:34:40.372469466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 13 10:34:40.372497 containerd[1557]: time="2025-09-13T10:34:40.372482969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 13 10:34:40.372497 containerd[1557]: time="2025-09-13T10:34:40.372494062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 13 10:34:40.372574 containerd[1557]: time="2025-09-13T10:34:40.372509602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 13 10:34:40.372574 containerd[1557]: time="2025-09-13T10:34:40.372523146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 13 10:34:40.372574 containerd[1557]: time="2025-09-13T10:34:40.372536401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 13 10:34:40.372574 containerd[1557]: time="2025-09-13T10:34:40.372547289Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 13 10:34:40.372574 containerd[1557]: time="2025-09-13T10:34:40.372559742Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 13 10:34:40.372669 containerd[1557]: time="2025-09-13T10:34:40.372625657Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 13 10:34:40.372669 containerd[1557]: time="2025-09-13T10:34:40.372639016Z" level=info msg="Start snapshots syncer"
Sep 13 10:34:40.372714 containerd[1557]: time="2025-09-13T10:34:40.372668747Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 13 10:34:40.372993 containerd[1557]: time="2025-09-13T10:34:40.372942126Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 13 10:34:40.373117 containerd[1557]: time="2025-09-13T10:34:40.373000890Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 13 10:34:40.373295 containerd[1557]: time="2025-09-13T10:34:40.373269339Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 13 10:34:40.373428 containerd[1557]: time="2025-09-13T10:34:40.373395284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 13 10:34:40.373428 containerd[1557]: time="2025-09-13T10:34:40.373423594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 13 10:34:40.373471 containerd[1557]: time="2025-09-13T10:34:40.373437900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 13 10:34:40.373471 containerd[1557]: time="2025-09-13T10:34:40.373448932Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 13 10:34:40.373471 containerd[1557]: time="2025-09-13T10:34:40.373462568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 13 10:34:40.373531 containerd[1557]: time="2025-09-13T10:34:40.373476215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 13 10:34:40.373531 containerd[1557]: time="2025-09-13T10:34:40.373488625Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 13 10:34:40.373531 containerd[1557]: time="2025-09-13T10:34:40.373515588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 13 10:34:40.373531 containerd[1557]: time="2025-09-13T10:34:40.373527032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 13 10:34:40.373600 containerd[1557]: time="2025-09-13T10:34:40.373541688Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 13 10:34:40.373600 containerd[1557]: time="2025-09-13T10:34:40.373580609Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 13 10:34:40.373600 containerd[1557]: time="2025-09-13T10:34:40.373594441Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 13 10:34:40.373659 containerd[1557]: time="2025-09-13T10:34:40.373605864Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 13 10:34:40.373659 containerd[1557]: time="2025-09-13T10:34:40.373619016Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 13 10:34:40.373659 containerd[1557]: time="2025-09-13T10:34:40.373629648Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 13 10:34:40.373659 containerd[1557]: time="2025-09-13T10:34:40.373640340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 13 10:34:40.373659 containerd[1557]: time="2025-09-13T10:34:40.373653595Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 13 10:34:40.373760 containerd[1557]: time="2025-09-13T10:34:40.373675784Z" level=info msg="runtime interface created"
Sep 13 10:34:40.373760 containerd[1557]: time="2025-09-13T10:34:40.373681866Z" level=info msg="created NRI interface"
Sep 13 10:34:40.373760 containerd[1557]: time="2025-09-13T10:34:40.373690253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 13 10:34:40.373760 containerd[1557]: time="2025-09-13T10:34:40.373703827Z" level=info msg="Connect containerd service"
Sep 13 10:34:40.373760 containerd[1557]: time="2025-09-13T10:34:40.373737151Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 10:34:40.374866 containerd[1557]: time="2025-09-13T10:34:40.374842088Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 10:34:40.417838 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 10:34:40.419402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:34:40.420907 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 10:34:40.421172 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 10:34:40.439513 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 10:34:40.468842 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 10:34:40.674639 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 10:34:40.677391 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 10:34:40.678883 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 10:34:40.709795 containerd[1557]: time="2025-09-13T10:34:40.709759676Z" level=info msg="Start subscribing containerd event"
Sep 13 10:34:40.710148 containerd[1557]: time="2025-09-13T10:34:40.710131018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 10:34:40.710243 containerd[1557]: time="2025-09-13T10:34:40.710230484Z" level=info msg=serving...
address=/run/containerd/containerd.sock Sep 13 10:34:40.711302 containerd[1557]: time="2025-09-13T10:34:40.711230635Z" level=info msg="Start recovering state" Sep 13 10:34:40.711414 containerd[1557]: time="2025-09-13T10:34:40.711380013Z" level=info msg="Start event monitor" Sep 13 10:34:40.711414 containerd[1557]: time="2025-09-13T10:34:40.711394637Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:34:40.711414 containerd[1557]: time="2025-09-13T10:34:40.711409950Z" level=info msg="Start streaming server" Sep 13 10:34:40.711470 containerd[1557]: time="2025-09-13T10:34:40.711419356Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:34:40.711470 containerd[1557]: time="2025-09-13T10:34:40.711428598Z" level=info msg="runtime interface starting up..." Sep 13 10:34:40.711470 containerd[1557]: time="2025-09-13T10:34:40.711434979Z" level=info msg="starting plugins..." Sep 13 10:34:40.711470 containerd[1557]: time="2025-09-13T10:34:40.711450725Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:34:40.711614 containerd[1557]: time="2025-09-13T10:34:40.711593661Z" level=info msg="containerd successfully booted in 0.364156s" Sep 13 10:34:40.711683 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:34:40.781959 tar[1555]: linux-amd64/README.md Sep 13 10:34:40.809720 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:34:41.701673 systemd-networkd[1482]: eth0: Gained IPv6LL Sep 13 10:34:41.704522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:34:41.706249 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:34:41.708652 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 10:34:41.711079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
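The `failed to load cni during init` error earlier in this boot is the expected first-boot state: containerd's CRI plugin found no network config in `/etc/cni/net.d` (the `confDir` from the config dump above), and its conf syncer will pick one up once something writes it. A minimal sketch of a bridge conflist that would satisfy the loader — the network name, subnet, and file path here are illustrative assumptions, not values taken from this host:

```python
import json

# Hypothetical minimal CNI conflist; containerd's CRI plugin scans its
# confDir (/etc/cni/net.d in the config dump above) for files like this.
conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",  # illustrative name, not from this host
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],  # assumed subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

rendered = json.dumps(conflist, indent=2)
# Writing this out as e.g. /etc/cni/net.d/10-containerd-net.conflist would
# let the "cni network conf syncer" started above load it on its next pass.
print(rendered)
```
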
Sep 13 10:34:41.718153 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:34:41.735797 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 10:34:41.736126 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:34:41.737716 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:34:41.742787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 10:34:43.299337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:34:43.301007 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:34:43.302858 systemd[1]: Startup finished in 2.697s (kernel) + 5.438s (initrd) + 5.299s (userspace) = 13.435s. Sep 13 10:34:43.333408 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:34:43.948138 kubelet[1671]: E0913 10:34:43.948070 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:34:43.953362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:34:43.953551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:34:43.953969 systemd[1]: kubelet.service: Consumed 2.024s CPU time, 268M memory peak. Sep 13 10:34:45.808340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:34:45.809558 systemd[1]: Started sshd@0-10.0.0.4:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). 
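The kubelet exit above (`status=1/FAILURE`) is the normal pre-bootstrap state: the unit starts before `kubeadm init`/`kubeadm join` has written `/var/lib/kubelet/config.yaml`, fails the `open()` on that path, and leaves systemd to restart it later. A small sketch of the same precondition, with the path taken from the error message and the helper name being an assumption for illustration:

```python
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the log above

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    """Mirror the kubelet's failing open(): the config file must exist
    and be readable before the service can come up."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

# On this host the file is absent until kubeadm writes it, so the unit
# keeps exiting with status=1 and systemd keeps scheduling restarts.
if not kubelet_config_present():
    print(f"kubelet would exit: open {KUBELET_CONFIG}: no such file or directory")
```
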
Sep 13 10:34:45.922383 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:45.923865 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:45.930210 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 10:34:45.931276 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:34:45.937746 systemd-logind[1543]: New session 1 of user core. Sep 13 10:34:45.953095 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:34:45.956094 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:34:45.970385 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:34:45.972635 systemd-logind[1543]: New session c1 of user core. Sep 13 10:34:46.121188 systemd[1690]: Queued start job for default target default.target. Sep 13 10:34:46.142201 systemd[1690]: Created slice app.slice - User Application Slice. Sep 13 10:34:46.142224 systemd[1690]: Reached target paths.target - Paths. Sep 13 10:34:46.142258 systemd[1690]: Reached target timers.target - Timers. Sep 13 10:34:46.143571 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:34:46.154284 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:34:46.154421 systemd[1690]: Reached target sockets.target - Sockets. Sep 13 10:34:46.154471 systemd[1690]: Reached target basic.target - Basic System. Sep 13 10:34:46.154515 systemd[1690]: Reached target default.target - Main User Target. Sep 13 10:34:46.154550 systemd[1690]: Startup finished in 175ms. Sep 13 10:34:46.154587 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:34:46.156092 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 13 10:34:46.221344 systemd[1]: Started sshd@1-10.0.0.4:22-10.0.0.1:33794.service - OpenSSH per-connection server daemon (10.0.0.1:33794). Sep 13 10:34:46.271542 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 33794 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:46.272778 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:46.276638 systemd-logind[1543]: New session 2 of user core. Sep 13 10:34:46.283149 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 10:34:46.334600 sshd[1704]: Connection closed by 10.0.0.1 port 33794 Sep 13 10:34:46.334885 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 13 10:34:46.342468 systemd[1]: sshd@1-10.0.0.4:22-10.0.0.1:33794.service: Deactivated successfully. Sep 13 10:34:46.344119 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:34:46.344800 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:34:46.347243 systemd[1]: Started sshd@2-10.0.0.4:22-10.0.0.1:33808.service - OpenSSH per-connection server daemon (10.0.0.1:33808). Sep 13 10:34:46.347836 systemd-logind[1543]: Removed session 2. Sep 13 10:34:46.394918 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 33808 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:46.395996 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:46.399788 systemd-logind[1543]: New session 3 of user core. Sep 13 10:34:46.406148 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:34:46.454047 sshd[1713]: Connection closed by 10.0.0.1 port 33808 Sep 13 10:34:46.454282 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Sep 13 10:34:46.465304 systemd[1]: sshd@2-10.0.0.4:22-10.0.0.1:33808.service: Deactivated successfully. 
Sep 13 10:34:46.466903 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:34:46.467567 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Sep 13 10:34:46.469938 systemd[1]: Started sshd@3-10.0.0.4:22-10.0.0.1:33820.service - OpenSSH per-connection server daemon (10.0.0.1:33820). Sep 13 10:34:46.470512 systemd-logind[1543]: Removed session 3. Sep 13 10:34:46.523113 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 33820 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:46.524140 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:46.527735 systemd-logind[1543]: New session 4 of user core. Sep 13 10:34:46.538137 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 10:34:46.590591 sshd[1722]: Connection closed by 10.0.0.1 port 33820 Sep 13 10:34:46.590897 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Sep 13 10:34:46.606469 systemd[1]: sshd@3-10.0.0.4:22-10.0.0.1:33820.service: Deactivated successfully. Sep 13 10:34:46.608102 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:34:46.608755 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:34:46.611352 systemd[1]: Started sshd@4-10.0.0.4:22-10.0.0.1:33822.service - OpenSSH per-connection server daemon (10.0.0.1:33822). Sep 13 10:34:46.611875 systemd-logind[1543]: Removed session 4. Sep 13 10:34:46.659912 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 33822 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:46.660995 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:46.664773 systemd-logind[1543]: New session 5 of user core. Sep 13 10:34:46.673146 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 10:34:46.729694 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:34:46.729994 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:34:46.748494 sudo[1732]: pam_unix(sudo:session): session closed for user root Sep 13 10:34:46.749943 sshd[1731]: Connection closed by 10.0.0.1 port 33822 Sep 13 10:34:46.750272 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 13 10:34:46.762386 systemd[1]: sshd@4-10.0.0.4:22-10.0.0.1:33822.service: Deactivated successfully. Sep 13 10:34:46.764108 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:34:46.764793 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Sep 13 10:34:46.767301 systemd[1]: Started sshd@5-10.0.0.4:22-10.0.0.1:33828.service - OpenSSH per-connection server daemon (10.0.0.1:33828). Sep 13 10:34:46.767835 systemd-logind[1543]: Removed session 5. Sep 13 10:34:46.830164 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 33828 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:46.831932 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:46.836315 systemd-logind[1543]: New session 6 of user core. Sep 13 10:34:46.847175 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 10:34:46.900926 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:34:46.901241 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:34:47.179508 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 13 10:34:47.185817 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:34:47.186222 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:34:47.195893 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:34:47.244367 augenrules[1765]: No rules Sep 13 10:34:47.246019 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:34:47.246390 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:34:47.247658 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 13 10:34:47.249063 sshd[1741]: Connection closed by 10.0.0.1 port 33828 Sep 13 10:34:47.249404 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Sep 13 10:34:47.258562 systemd[1]: sshd@5-10.0.0.4:22-10.0.0.1:33828.service: Deactivated successfully. Sep 13 10:34:47.260257 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:34:47.261028 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:34:47.263501 systemd[1]: Started sshd@6-10.0.0.4:22-10.0.0.1:33844.service - OpenSSH per-connection server daemon (10.0.0.1:33844). Sep 13 10:34:47.264284 systemd-logind[1543]: Removed session 6. Sep 13 10:34:47.310940 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 33844 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:34:47.312084 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:34:47.316438 systemd-logind[1543]: New session 7 of user core. 
Sep 13 10:34:47.327167 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 10:34:47.379538 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:34:47.379831 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:34:48.253798 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 10:34:48.273340 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:34:48.914772 dockerd[1799]: time="2025-09-13T10:34:48.914687898Z" level=info msg="Starting up" Sep 13 10:34:48.915613 dockerd[1799]: time="2025-09-13T10:34:48.915577994Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:34:48.935449 dockerd[1799]: time="2025-09-13T10:34:48.935387716Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:34:48.996906 dockerd[1799]: time="2025-09-13T10:34:48.996838430Z" level=info msg="Loading containers: start." Sep 13 10:34:49.009075 kernel: Initializing XFRM netlink socket Sep 13 10:34:49.258136 systemd-networkd[1482]: docker0: Link UP Sep 13 10:34:49.264110 dockerd[1799]: time="2025-09-13T10:34:49.264079399Z" level=info msg="Loading containers: done." 
Sep 13 10:34:49.281566 dockerd[1799]: time="2025-09-13T10:34:49.281515903Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:34:49.281698 dockerd[1799]: time="2025-09-13T10:34:49.281620102Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:34:49.281758 dockerd[1799]: time="2025-09-13T10:34:49.281726766Z" level=info msg="Initializing buildkit" Sep 13 10:34:49.319648 dockerd[1799]: time="2025-09-13T10:34:49.319600449Z" level=info msg="Completed buildkit initialization" Sep 13 10:34:49.328725 dockerd[1799]: time="2025-09-13T10:34:49.328690821Z" level=info msg="Daemon has completed initialization" Sep 13 10:34:49.328839 dockerd[1799]: time="2025-09-13T10:34:49.328775122Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:34:49.328925 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 10:34:49.951656 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck771306536-merged.mount: Deactivated successfully. Sep 13 10:34:50.256218 containerd[1557]: time="2025-09-13T10:34:50.255541655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 10:34:50.901403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752526628.mount: Deactivated successfully. 
Sep 13 10:34:52.242177 containerd[1557]: time="2025-09-13T10:34:52.242119109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:52.242866 containerd[1557]: time="2025-09-13T10:34:52.242830633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 10:34:52.243819 containerd[1557]: time="2025-09-13T10:34:52.243771818Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:52.246189 containerd[1557]: time="2025-09-13T10:34:52.246147686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:52.247016 containerd[1557]: time="2025-09-13T10:34:52.246990913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.991408111s" Sep 13 10:34:52.247071 containerd[1557]: time="2025-09-13T10:34:52.247035862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 10:34:52.248207 containerd[1557]: time="2025-09-13T10:34:52.248188644Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 10:34:53.848971 containerd[1557]: time="2025-09-13T10:34:53.848920042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:53.850403 containerd[1557]: time="2025-09-13T10:34:53.850383124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 10:34:53.851550 containerd[1557]: time="2025-09-13T10:34:53.851528617Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:53.853872 containerd[1557]: time="2025-09-13T10:34:53.853835288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:53.854771 containerd[1557]: time="2025-09-13T10:34:53.854732363Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.606520001s" Sep 13 10:34:53.854828 containerd[1557]: time="2025-09-13T10:34:53.854776225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 10:34:53.855340 containerd[1557]: time="2025-09-13T10:34:53.855308115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 10:34:54.168683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 10:34:54.170212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:34:54.440261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
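Each "Pulled image" entry above pairs a byte count with a wall-clock duration, so the effective registry throughput is easy to recover. A quick check using the kube-apiserver figures from the log (30114893 bytes read in 1.991408111s):

```python
# Figures taken verbatim from the kube-apiserver pull entries above.
bytes_read = 30_114_893     # "bytes read=30114893"
duration_s = 1.991408111    # "in 1.991408111s"

rate_mb_s = bytes_read / duration_s / 1e6
print(f"~{rate_mb_s:.1f} MB/s")  # roughly 15.1 MB/s for this pull
```

The same arithmetic applies to the other pulls; note the reported "size" (e.g. 30111492) is the unpacked image size, distinct from the bytes transferred.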
Sep 13 10:34:54.444848 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:34:54.568873 kubelet[2086]: E0913 10:34:54.568789 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:34:54.575793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:34:54.575990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:34:54.576367 systemd[1]: kubelet.service: Consumed 287ms CPU time, 109.6M memory peak. Sep 13 10:34:56.003866 containerd[1557]: time="2025-09-13T10:34:56.003803530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:56.004581 containerd[1557]: time="2025-09-13T10:34:56.004543141Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 10:34:56.005698 containerd[1557]: time="2025-09-13T10:34:56.005671964Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:56.008114 containerd[1557]: time="2025-09-13T10:34:56.008075905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:56.009044 containerd[1557]: time="2025-09-13T10:34:56.008992423Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.153654841s" Sep 13 10:34:56.009044 containerd[1557]: time="2025-09-13T10:34:56.009042416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 10:34:56.009626 containerd[1557]: time="2025-09-13T10:34:56.009587510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 10:34:57.187062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount308272336.mount: Deactivated successfully. Sep 13 10:34:57.835971 containerd[1557]: time="2025-09-13T10:34:57.835920266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:57.836579 containerd[1557]: time="2025-09-13T10:34:57.836526036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 10:34:57.837610 containerd[1557]: time="2025-09-13T10:34:57.837573792Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:57.839283 containerd[1557]: time="2025-09-13T10:34:57.839230092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:57.839701 containerd[1557]: time="2025-09-13T10:34:57.839677110Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.830062535s" Sep 13 10:34:57.839742 containerd[1557]: time="2025-09-13T10:34:57.839703142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 10:34:57.840327 containerd[1557]: time="2025-09-13T10:34:57.840155194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 10:34:58.437587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31892187.mount: Deactivated successfully. Sep 13 10:34:59.546717 containerd[1557]: time="2025-09-13T10:34:59.546659368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:59.547325 containerd[1557]: time="2025-09-13T10:34:59.547302272Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 10:34:59.548498 containerd[1557]: time="2025-09-13T10:34:59.548452915Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:59.550883 containerd[1557]: time="2025-09-13T10:34:59.550853131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:34:59.551672 containerd[1557]: time="2025-09-13T10:34:59.551629039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.711444534s" Sep 13 10:34:59.551672 containerd[1557]: time="2025-09-13T10:34:59.551656881Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 10:34:59.552105 containerd[1557]: time="2025-09-13T10:34:59.552079459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 10:35:00.095144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752406533.mount: Deactivated successfully. Sep 13 10:35:00.100125 containerd[1557]: time="2025-09-13T10:35:00.100089144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:35:00.100819 containerd[1557]: time="2025-09-13T10:35:00.100798013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 10:35:00.101898 containerd[1557]: time="2025-09-13T10:35:00.101855500Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:35:00.103717 containerd[1557]: time="2025-09-13T10:35:00.103685715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:35:00.104232 containerd[1557]: time="2025-09-13T10:35:00.104198750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 552.089722ms" Sep 13 10:35:00.104232 containerd[1557]: time="2025-09-13T10:35:00.104226805Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 10:35:00.104654 containerd[1557]: time="2025-09-13T10:35:00.104625661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 10:35:00.693672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403552412.mount: Deactivated successfully. Sep 13 10:35:02.319455 containerd[1557]: time="2025-09-13T10:35:02.319400953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:02.320057 containerd[1557]: time="2025-09-13T10:35:02.320007502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 10:35:02.321128 containerd[1557]: time="2025-09-13T10:35:02.321095124Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:02.323514 containerd[1557]: time="2025-09-13T10:35:02.323468425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:02.324409 containerd[1557]: time="2025-09-13T10:35:02.324374978Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.219723945s" Sep 13 10:35:02.324409 containerd[1557]: time="2025-09-13T10:35:02.324405660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 10:35:04.668429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 10:35:04.669930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:35:04.973511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:04.982294 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:35:05.013713 kubelet[2250]: E0913 10:35:05.013649 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:35:05.017500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:35:05.017678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:35:05.018041 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.5M memory peak. Sep 13 10:35:05.249943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:05.250180 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.5M memory peak. Sep 13 10:35:05.252314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:35:05.274294 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit session-7.scope)... Sep 13 10:35:05.274308 systemd[1]: Reloading... Sep 13 10:35:05.351054 zram_generator::config[2311]: No configuration found. 
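An aside on the etcd pull that completes at the top of this entry: containerd logs both the image size and the wall-clock pull duration, so the effective download throughput can be read straight off the line. A quick back-of-the-envelope check, with the two values copied verbatim from the log:

```python
# Throughput of the registry.k8s.io/etcd:3.5.21-0 pull, using the size and
# duration reported in the containerd log line above.
size_bytes = 58_938_593          # size "58938593"
duration_s = 2.219723945         # "in 2.219723945s"

rate_mb_s = size_bytes / duration_s / 1e6
print(f"{rate_mb_s:.1f} MB/s")   # roughly 26.6 MB/s
```

The earlier pause:3.10 pull (320368 bytes in ~552ms) is far slower per byte, which is expected: tiny layers are dominated by registry round-trips rather than bandwidth.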
Sep 13 10:35:05.780217 systemd[1]: Reloading finished in 505 ms. Sep 13 10:35:05.856645 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 10:35:05.856741 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 10:35:05.857048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:05.857087 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.2M memory peak. Sep 13 10:35:05.858428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:35:06.011598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:06.015994 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:35:06.051820 kubelet[2356]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:35:06.051820 kubelet[2356]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:35:06.051820 kubelet[2356]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
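The crash loop above (run.go:72, exit status 1) is the expected failure mode on a node where kubeadm has not yet written /var/lib/kubelet/config.yaml; once it exists, the kubelet starts, and the deprecation warnings that follow (--container-runtime-endpoint, --volume-plugin-dir) point at the same file as the preferred home for those settings. For orientation only, a minimal KubeletConfiguration of the kind kubeadm generates might look like the sketch below; the field names are the standard v1beta1 schema, but the concrete values are assumptions, not recovered from this log:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml (illustrative values,
# not taken from the log).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches "Using cgroup driver ... systemd" below
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```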
Sep 13 10:35:06.052165 kubelet[2356]: I0913 10:35:06.051820 2356 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:35:06.466210 kubelet[2356]: I0913 10:35:06.466176 2356 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 10:35:06.466210 kubelet[2356]: I0913 10:35:06.466201 2356 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:35:06.466442 kubelet[2356]: I0913 10:35:06.466420 2356 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 10:35:06.489545 kubelet[2356]: I0913 10:35:06.489488 2356 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:35:06.490138 kubelet[2356]: E0913 10:35:06.490107 2356 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 10:35:06.499431 kubelet[2356]: I0913 10:35:06.499398 2356 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:35:06.504998 kubelet[2356]: I0913 10:35:06.504981 2356 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:35:06.505265 kubelet[2356]: I0913 10:35:06.505244 2356 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:35:06.505446 kubelet[2356]: I0913 10:35:06.505263 2356 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:35:06.505546 kubelet[2356]: I0913 10:35:06.505455 2356 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:35:06.505546 
kubelet[2356]: I0913 10:35:06.505463 2356 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 10:35:06.506236 kubelet[2356]: I0913 10:35:06.506214 2356 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:35:06.508380 kubelet[2356]: I0913 10:35:06.508351 2356 kubelet.go:480] "Attempting to sync node with API server" Sep 13 10:35:06.508380 kubelet[2356]: I0913 10:35:06.508376 2356 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:35:06.508457 kubelet[2356]: I0913 10:35:06.508412 2356 kubelet.go:386] "Adding apiserver pod source" Sep 13 10:35:06.510450 kubelet[2356]: I0913 10:35:06.510352 2356 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:35:06.513509 kubelet[2356]: E0913 10:35:06.513469 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 10:35:06.513564 kubelet[2356]: E0913 10:35:06.513511 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 10:35:06.513985 kubelet[2356]: I0913 10:35:06.513967 2356 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:35:06.514527 kubelet[2356]: I0913 10:35:06.514498 2356 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 10:35:06.515054 kubelet[2356]: W0913 10:35:06.515011 
2356 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 10:35:06.517585 kubelet[2356]: I0913 10:35:06.517556 2356 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:35:06.517632 kubelet[2356]: I0913 10:35:06.517610 2356 server.go:1289] "Started kubelet" Sep 13 10:35:06.518946 kubelet[2356]: I0913 10:35:06.518886 2356 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:35:06.520688 kubelet[2356]: I0913 10:35:06.519697 2356 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:35:06.520688 kubelet[2356]: I0913 10:35:06.519724 2356 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:35:06.520773 kubelet[2356]: I0913 10:35:06.520755 2356 server.go:317] "Adding debug handlers to kubelet server" Sep 13 10:35:06.525232 kubelet[2356]: I0913 10:35:06.524545 2356 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:35:06.526327 kubelet[2356]: I0913 10:35:06.519700 2356 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:35:06.526716 kubelet[2356]: I0913 10:35:06.526693 2356 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:35:06.526829 kubelet[2356]: I0913 10:35:06.526810 2356 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:35:06.526829 kubelet[2356]: E0913 10:35:06.526690 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:06.526904 kubelet[2356]: I0913 10:35:06.526873 2356 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:35:06.527588 kubelet[2356]: E0913 10:35:06.527268 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 10:35:06.528168 kubelet[2356]: E0913 10:35:06.521097 2356 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864d123db799959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:35:06.517576025 +0000 UTC m=+0.497640680,LastTimestamp:2025-09-13 10:35:06.517576025 +0000 UTC m=+0.497640680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:35:06.528813 kubelet[2356]: E0913 10:35:06.528764 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="200ms" Sep 13 10:35:06.530289 kubelet[2356]: E0913 10:35:06.530265 2356 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:35:06.530771 kubelet[2356]: I0913 10:35:06.530743 2356 factory.go:223] Registration of the containerd container factory successfully Sep 13 10:35:06.530771 kubelet[2356]: I0913 10:35:06.530772 2356 factory.go:223] Registration of the systemd container factory successfully Sep 13 10:35:06.530864 kubelet[2356]: I0913 10:35:06.530847 2356 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:35:06.544849 kubelet[2356]: I0913 10:35:06.544430 2356 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 10:35:06.545756 kubelet[2356]: I0913 10:35:06.545735 2356 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 10:35:06.545811 kubelet[2356]: I0913 10:35:06.545760 2356 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 10:35:06.545811 kubelet[2356]: I0913 10:35:06.545781 2356 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
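The NodeConfig dump a few entries back includes five HardEvictionThresholds: an absolute floor of 100Mi on memory.available plus percentage floors on nodefs/imagefs space and inodes. A minimal sketch of how such quantity-vs-percentage thresholds evaluate, with the signal names and values taken from the log and everything else (function names, sample stats) assumed for illustration — this is not kubelet internals:

```python
# Hard-eviction thresholds as logged in the NodeConfig above:
# quantity thresholds are absolute bytes, percentage thresholds are a
# fraction of the corresponding capacity.
THRESHOLDS = {
    "memory.available":  ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":  ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal, available, capacity):
    """True if the observed 'available' falls below the signal's floor."""
    kind, value = THRESHOLDS[signal]
    floor = value if kind == "quantity" else value * capacity
    return available < floor
```

For example, 50Mi of available memory breaches the 100Mi floor regardless of capacity, while 30% free node filesystem clears the 10% floor. Note the "Eviction manager: failed to get summary stats" errors later in the log: none of this can fire until the node object and stats providers exist.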
Sep 13 10:35:06.545811 kubelet[2356]: I0913 10:35:06.545789 2356 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 10:35:06.545893 kubelet[2356]: E0913 10:35:06.545821 2356 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:35:06.546267 kubelet[2356]: E0913 10:35:06.546237 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 10:35:06.546434 kubelet[2356]: I0913 10:35:06.546413 2356 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:35:06.546485 kubelet[2356]: I0913 10:35:06.546434 2356 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:35:06.546611 kubelet[2356]: I0913 10:35:06.546593 2356 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:35:06.626991 kubelet[2356]: E0913 10:35:06.626954 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:06.646300 kubelet[2356]: E0913 10:35:06.646258 2356 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:35:06.727541 kubelet[2356]: E0913 10:35:06.727455 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:06.729907 kubelet[2356]: E0913 10:35:06.729877 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="400ms" Sep 13 10:35:06.828236 kubelet[2356]: E0913 
10:35:06.828199 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:06.846347 kubelet[2356]: E0913 10:35:06.846306 2356 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:35:06.899203 kubelet[2356]: I0913 10:35:06.899165 2356 policy_none.go:49] "None policy: Start" Sep 13 10:35:06.899203 kubelet[2356]: I0913 10:35:06.899204 2356 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:35:06.899275 kubelet[2356]: I0913 10:35:06.899241 2356 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:35:06.906098 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 10:35:06.917239 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 10:35:06.920651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 10:35:06.928736 kubelet[2356]: E0913 10:35:06.928705 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:06.937836 kubelet[2356]: E0913 10:35:06.937815 2356 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 10:35:06.938188 kubelet[2356]: I0913 10:35:06.938043 2356 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:35:06.938188 kubelet[2356]: I0913 10:35:06.938061 2356 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:35:06.938339 kubelet[2356]: I0913 10:35:06.938320 2356 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:35:06.938974 kubelet[2356]: E0913 10:35:06.938927 2356 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 10:35:06.938974 kubelet[2356]: E0913 10:35:06.938964 2356 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 10:35:07.039694 kubelet[2356]: I0913 10:35:07.039619 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:35:07.039955 kubelet[2356]: E0913 10:35:07.039927 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 13 10:35:07.130713 kubelet[2356]: E0913 10:35:07.130673 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="800ms" Sep 13 10:35:07.240926 kubelet[2356]: I0913 10:35:07.240899 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:35:07.241136 kubelet[2356]: E0913 10:35:07.241115 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 13 10:35:07.331772 kubelet[2356]: I0913 10:35:07.331678 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:07.331772 kubelet[2356]: I0913 10:35:07.331707 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:07.331772 kubelet[2356]: I0913 10:35:07.331728 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:07.435550 kubelet[2356]: E0913 10:35:07.435521 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 10:35:07.636656 systemd[1]: Created slice kubepods-burstable-pod6cea0341579e4085306148c91c671d21.slice - libcontainer container kubepods-burstable-pod6cea0341579e4085306148c91c671d21.slice. 
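The reflector errors in this section embed percent-encoded field selectors (`metadata.name%3Dlocalhost` for nodes, `spec.clusterIP%21%3DNone` for services). Decoding the query string makes the failed list requests readable; a quick sketch with Python's standard library, using the node-list URL copied from the log:

```python
from urllib.parse import urlsplit, parse_qs

# List-request URL exactly as it appears in the reflector error above.
url = ("https://10.0.0.4:6443/api/v1/nodes"
       "?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0")

params = parse_qs(urlsplit(url).query)
print(params["fieldSelector"])   # ['metadata.name=localhost']
```

So the kubelet is simply asking the apiserver for its own Node object (and, in the services case, every Service with a cluster IP); the failures are pure connection refusals because the apiserver static pod has not started yet.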
Sep 13 10:35:07.642006 kubelet[2356]: E0913 10:35:07.641969 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 10:35:07.642454 kubelet[2356]: I0913 10:35:07.642436 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:35:07.642736 kubelet[2356]: E0913 10:35:07.642708 2356 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 13 10:35:07.644803 kubelet[2356]: E0913 10:35:07.644776 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:07.645124 kubelet[2356]: E0913 10:35:07.645098 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:07.645560 containerd[1557]: time="2025-09-13T10:35:07.645520250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cea0341579e4085306148c91c671d21,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:07.733862 kubelet[2356]: I0913 10:35:07.733841 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:07.733909 kubelet[2356]: I0913 10:35:07.733869 2356 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:07.733909 kubelet[2356]: I0913 10:35:07.733887 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:07.733951 kubelet[2356]: I0913 10:35:07.733915 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:07.733951 kubelet[2356]: I0913 10:35:07.733936 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:07.753276 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
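The VerifyControllerAttachedVolume lines above enumerate the hostPath volumes of the kube-controller-manager static pod. For orientation, the corresponding PodSpec stanza might look like the sketch below: the volume names come from the log, the flexvolume path matches the probe.go line earlier in this section, and the remaining hostPath paths are the usual kubeadm defaults, assumed rather than recovered from this log:

```yaml
# Hypothetical excerpt of the kube-controller-manager static-pod volumes
# implied by the reconciler_common.go lines above (paths partly assumed).
volumes:
- name: k8s-certs
  hostPath:
    path: /etc/kubernetes/pki
    type: DirectoryOrCreate
- name: kubeconfig
  hostPath:
    path: /etc/kubernetes/controller-manager.conf
    type: FileOrCreate
- name: flexvolume-dir
  hostPath:
    path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec
    type: DirectoryOrCreate
```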
Sep 13 10:35:07.755192 kubelet[2356]: E0913 10:35:07.755170 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:07.768147 containerd[1557]: time="2025-09-13T10:35:07.768106184Z" level=info msg="connecting to shim a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f" address="unix:///run/containerd/s/428863ae9aefac71de94b2c4132124a4732aca65c55d795e9c309f7bbc1cb339" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:07.769436 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 13 10:35:07.771211 kubelet[2356]: E0913 10:35:07.771191 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:07.792150 systemd[1]: Started cri-containerd-a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f.scope - libcontainer container a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f. 
Sep 13 10:35:07.831155 containerd[1557]: time="2025-09-13T10:35:07.831105596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cea0341579e4085306148c91c671d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f\"" Sep 13 10:35:07.832267 kubelet[2356]: E0913 10:35:07.832228 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:07.834750 kubelet[2356]: I0913 10:35:07.834723 2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:35:07.836474 containerd[1557]: time="2025-09-13T10:35:07.836449108Z" level=info msg="CreateContainer within sandbox \"a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 10:35:07.846178 containerd[1557]: time="2025-09-13T10:35:07.846126526Z" level=info msg="Container d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:07.853617 containerd[1557]: time="2025-09-13T10:35:07.853576042Z" level=info msg="CreateContainer within sandbox \"a94db093c6a7650a4ad2b288ecc9e8dfc192fc96b7ec3872853b78499ea0f30f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce\"" Sep 13 10:35:07.854170 containerd[1557]: time="2025-09-13T10:35:07.854146413Z" level=info msg="StartContainer for \"d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce\"" Sep 13 10:35:07.855131 containerd[1557]: 
time="2025-09-13T10:35:07.855106722Z" level=info msg="connecting to shim d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce" address="unix:///run/containerd/s/428863ae9aefac71de94b2c4132124a4732aca65c55d795e9c309f7bbc1cb339" protocol=ttrpc version=3 Sep 13 10:35:07.881146 systemd[1]: Started cri-containerd-d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce.scope - libcontainer container d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce. Sep 13 10:35:07.923748 containerd[1557]: time="2025-09-13T10:35:07.923704943Z" level=info msg="StartContainer for \"d7ae28861302e00a7ab96f9b71c60488845afa283d5dd3e4fd3888aab6fdc7ce\" returns successfully" Sep 13 10:35:07.931813 kubelet[2356]: E0913 10:35:07.931776 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="1.6s" Sep 13 10:35:07.948771 kubelet[2356]: E0913 10:35:07.948721 2356 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 10:35:08.056081 kubelet[2356]: E0913 10:35:08.056019 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.056532 containerd[1557]: time="2025-09-13T10:35:08.056472004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:08.072362 kubelet[2356]: E0913 10:35:08.072336 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.072782 containerd[1557]: time="2025-09-13T10:35:08.072749710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:08.075040 containerd[1557]: time="2025-09-13T10:35:08.074767887Z" level=info msg="connecting to shim f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85" address="unix:///run/containerd/s/509c24fa12146eba0d5263a31974a0630436bd7e971e3dbd73e7fa5851dae271" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:08.092788 containerd[1557]: time="2025-09-13T10:35:08.092728455Z" level=info msg="connecting to shim b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625" address="unix:///run/containerd/s/29395130a32a921bbe4f376ddd57217e73925f65bfbae0e6a16c4b8b07440e1d" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:08.100263 systemd[1]: Started cri-containerd-f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85.scope - libcontainer container f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85. Sep 13 10:35:08.115150 systemd[1]: Started cri-containerd-b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625.scope - libcontainer container b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625. 
Sep 13 10:35:08.155686 containerd[1557]: time="2025-09-13T10:35:08.155637635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85\"" Sep 13 10:35:08.156773 kubelet[2356]: E0913 10:35:08.156744 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.161979 containerd[1557]: time="2025-09-13T10:35:08.161955563Z" level=info msg="CreateContainer within sandbox \"f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 10:35:08.163827 containerd[1557]: time="2025-09-13T10:35:08.163786919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625\"" Sep 13 10:35:08.164657 kubelet[2356]: E0913 10:35:08.164622 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.169324 containerd[1557]: time="2025-09-13T10:35:08.168910004Z" level=info msg="CreateContainer within sandbox \"b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 10:35:08.170082 containerd[1557]: time="2025-09-13T10:35:08.170064444Z" level=info msg="Container cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:08.178376 containerd[1557]: time="2025-09-13T10:35:08.178008992Z" level=info msg="CreateContainer within 
sandbox \"f673324a3c0fd0d58540b6502b993211941ea88e92fa248405b271c62c722c85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604\"" Sep 13 10:35:08.178691 containerd[1557]: time="2025-09-13T10:35:08.178672565Z" level=info msg="StartContainer for \"cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604\"" Sep 13 10:35:08.179691 containerd[1557]: time="2025-09-13T10:35:08.179672175Z" level=info msg="connecting to shim cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604" address="unix:///run/containerd/s/509c24fa12146eba0d5263a31974a0630436bd7e971e3dbd73e7fa5851dae271" protocol=ttrpc version=3 Sep 13 10:35:08.182402 containerd[1557]: time="2025-09-13T10:35:08.182371771Z" level=info msg="Container 3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:08.189840 containerd[1557]: time="2025-09-13T10:35:08.189793631Z" level=info msg="CreateContainer within sandbox \"b60ba4e02b5ecac8fd009837b919ee3cb0309049500d977a055783bae6b9c625\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc\"" Sep 13 10:35:08.190287 containerd[1557]: time="2025-09-13T10:35:08.190256077Z" level=info msg="StartContainer for \"3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc\"" Sep 13 10:35:08.191719 containerd[1557]: time="2025-09-13T10:35:08.191685430Z" level=info msg="connecting to shim 3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc" address="unix:///run/containerd/s/29395130a32a921bbe4f376ddd57217e73925f65bfbae0e6a16c4b8b07440e1d" protocol=ttrpc version=3 Sep 13 10:35:08.205187 systemd[1]: Started cri-containerd-cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604.scope - libcontainer container cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604. 
Sep 13 10:35:08.217496 systemd[1]: Started cri-containerd-3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc.scope - libcontainer container 3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc. Sep 13 10:35:08.265528 containerd[1557]: time="2025-09-13T10:35:08.265491572Z" level=info msg="StartContainer for \"3a3da43e4093a8c55b21648a9a63cad87d90e02d86c9ec0308c80ab78d1ef3fc\" returns successfully" Sep 13 10:35:08.265592 containerd[1557]: time="2025-09-13T10:35:08.265566822Z" level=info msg="StartContainer for \"cb88d77c3556abc0a8275ac6f5897f57fdfa88b36240ab4b5a8e8aa746f18604\" returns successfully" Sep 13 10:35:08.444543 kubelet[2356]: I0913 10:35:08.444145 2356 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:35:08.553405 kubelet[2356]: E0913 10:35:08.553388 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:08.554129 kubelet[2356]: E0913 10:35:08.554050 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.555373 kubelet[2356]: E0913 10:35:08.555358 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:08.555508 kubelet[2356]: E0913 10:35:08.555496 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:08.559467 kubelet[2356]: E0913 10:35:08.559355 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:08.559467 kubelet[2356]: E0913 10:35:08.559428 2356 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:09.309475 kubelet[2356]: I0913 10:35:09.309318 2356 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:35:09.309475 kubelet[2356]: E0913 10:35:09.309363 2356 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 10:35:09.318040 kubelet[2356]: E0913 10:35:09.318007 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.353825 kubelet[2356]: E0913 10:35:09.353704 2356 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864d123db799959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:35:06.517576025 +0000 UTC m=+0.497640680,LastTimestamp:2025-09-13 10:35:06.517576025 +0000 UTC m=+0.497640680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:35:09.418200 kubelet[2356]: E0913 10:35:09.418158 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.519189 kubelet[2356]: E0913 10:35:09.519154 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.559964 kubelet[2356]: E0913 10:35:09.559635 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 
10:35:09.559964 kubelet[2356]: E0913 10:35:09.559759 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:09.559964 kubelet[2356]: E0913 10:35:09.559755 2356 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:35:09.559964 kubelet[2356]: E0913 10:35:09.559865 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:09.619652 kubelet[2356]: E0913 10:35:09.619612 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.719902 kubelet[2356]: E0913 10:35:09.719873 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.820622 kubelet[2356]: E0913 10:35:09.820532 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:09.921161 kubelet[2356]: E0913 10:35:09.921121 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.021332 kubelet[2356]: E0913 10:35:10.021300 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.122077 kubelet[2356]: E0913 10:35:10.122006 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.222675 kubelet[2356]: E0913 10:35:10.222636 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.323198 kubelet[2356]: E0913 10:35:10.323154 2356 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.423718 kubelet[2356]: E0913 10:35:10.423679 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.524659 kubelet[2356]: E0913 10:35:10.524626 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.625662 kubelet[2356]: E0913 10:35:10.625637 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.726471 kubelet[2356]: E0913 10:35:10.726365 2356 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:35:10.827277 kubelet[2356]: I0913 10:35:10.827251 2356 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:10.835166 kubelet[2356]: I0913 10:35:10.835135 2356 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:35:10.838400 kubelet[2356]: I0913 10:35:10.838357 2356 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:11.256153 kubelet[2356]: I0913 10:35:11.256121 2356 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:11.259740 kubelet[2356]: E0913 10:35:11.259705 2356 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:11.259881 kubelet[2356]: E0913 10:35:11.259862 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:11.375572 kubelet[2356]: I0913 10:35:11.375543 2356 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:11.380044 kubelet[2356]: E0913 10:35:11.379988 2356 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:11.380175 kubelet[2356]: E0913 10:35:11.380144 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:11.455640 systemd[1]: Reload requested from client PID 2639 ('systemctl') (unit session-7.scope)... Sep 13 10:35:11.455654 systemd[1]: Reloading... Sep 13 10:35:11.513664 kubelet[2356]: I0913 10:35:11.513350 2356 apiserver.go:52] "Watching apiserver" Sep 13 10:35:11.516982 kubelet[2356]: E0913 10:35:11.516956 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:11.527371 kubelet[2356]: I0913 10:35:11.527346 2356 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 10:35:11.533048 zram_generator::config[2682]: No configuration found. Sep 13 10:35:11.561000 kubelet[2356]: E0913 10:35:11.560979 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:11.561223 kubelet[2356]: E0913 10:35:11.561193 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:11.761234 systemd[1]: Reloading finished in 305 ms. Sep 13 10:35:11.790414 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:35:11.812357 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 13 10:35:11.812644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:11.812688 systemd[1]: kubelet.service: Consumed 937ms CPU time, 130.5M memory peak. Sep 13 10:35:11.815141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:35:12.013939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:35:12.018124 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:35:12.062051 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:35:12.062383 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:35:12.062383 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:35:12.062540 kubelet[2727]: I0913 10:35:12.062425 2727 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:35:12.070474 kubelet[2727]: I0913 10:35:12.070443 2727 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 10:35:12.070474 kubelet[2727]: I0913 10:35:12.070464 2727 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:35:12.070683 kubelet[2727]: I0913 10:35:12.070659 2727 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 10:35:12.071737 kubelet[2727]: I0913 10:35:12.071714 2727 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 10:35:12.073751 kubelet[2727]: I0913 10:35:12.073704 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:35:12.078831 kubelet[2727]: I0913 10:35:12.078806 2727 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:35:12.083425 kubelet[2727]: I0913 10:35:12.083390 2727 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:35:12.083663 kubelet[2727]: I0913 10:35:12.083626 2727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:35:12.083813 kubelet[2727]: I0913 10:35:12.083651 2727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:35:12.083894 kubelet[2727]: I0913 10:35:12.083815 2727 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:35:12.083894 
kubelet[2727]: I0913 10:35:12.083825 2727 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 10:35:12.083894 kubelet[2727]: I0913 10:35:12.083870 2727 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:35:12.084034 kubelet[2727]: I0913 10:35:12.084006 2727 kubelet.go:480] "Attempting to sync node with API server" Sep 13 10:35:12.084066 kubelet[2727]: I0913 10:35:12.084050 2727 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:35:12.084092 kubelet[2727]: I0913 10:35:12.084075 2727 kubelet.go:386] "Adding apiserver pod source" Sep 13 10:35:12.084116 kubelet[2727]: I0913 10:35:12.084093 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:35:12.084832 kubelet[2727]: I0913 10:35:12.084811 2727 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:35:12.085221 kubelet[2727]: I0913 10:35:12.085203 2727 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 10:35:12.088564 kubelet[2727]: I0913 10:35:12.088540 2727 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:35:12.088611 kubelet[2727]: I0913 10:35:12.088583 2727 server.go:1289] "Started kubelet" Sep 13 10:35:12.088700 kubelet[2727]: I0913 10:35:12.088670 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:35:12.089487 kubelet[2727]: I0913 10:35:12.089459 2727 server.go:317] "Adding debug handlers to kubelet server" Sep 13 10:35:12.091283 kubelet[2727]: I0913 10:35:12.091173 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:35:12.091419 kubelet[2727]: I0913 10:35:12.091378 2727 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:35:12.094964 
kubelet[2727]: E0913 10:35:12.094931 2727 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:35:12.095016 kubelet[2727]: I0913 10:35:12.094979 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:35:12.097295 kubelet[2727]: I0913 10:35:12.097269 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:35:12.099907 kubelet[2727]: I0913 10:35:12.099480 2727 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:35:12.100060 kubelet[2727]: I0913 10:35:12.100039 2727 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:35:12.100208 kubelet[2727]: I0913 10:35:12.100174 2727 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:35:12.101775 kubelet[2727]: I0913 10:35:12.101753 2727 factory.go:223] Registration of the systemd container factory successfully Sep 13 10:35:12.101841 kubelet[2727]: I0913 10:35:12.101827 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:35:12.103291 kubelet[2727]: I0913 10:35:12.103267 2727 factory.go:223] Registration of the containerd container factory successfully Sep 13 10:35:12.110725 kubelet[2727]: I0913 10:35:12.110690 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 10:35:12.113089 kubelet[2727]: I0913 10:35:12.113058 2727 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:35:12.113089 kubelet[2727]: I0913 10:35:12.113076 2727 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 10:35:12.113089 kubelet[2727]: I0913 10:35:12.113098 2727 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 10:35:12.113257 kubelet[2727]: I0913 10:35:12.113107 2727 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 10:35:12.113257 kubelet[2727]: E0913 10:35:12.113145 2727 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:35:12.139957 kubelet[2727]: I0913 10:35:12.139922 2727 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:35:12.139957 kubelet[2727]: I0913 10:35:12.139935 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:35:12.140060 kubelet[2727]: I0913 10:35:12.139972 2727 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:35:12.140137 kubelet[2727]: I0913 10:35:12.140117 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 10:35:12.140164 kubelet[2727]: I0913 10:35:12.140130 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 10:35:12.140164 kubelet[2727]: I0913 10:35:12.140147 2727 policy_none.go:49] "None policy: Start" Sep 13 10:35:12.140164 kubelet[2727]: I0913 10:35:12.140157 2727 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:35:12.140164 kubelet[2727]: I0913 10:35:12.140166 2727 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:35:12.140269 kubelet[2727]: I0913 10:35:12.140252 2727 state_mem.go:75] "Updated machine memory state" Sep 13 10:35:12.144560 kubelet[2727]: E0913 10:35:12.144519 2727 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 10:35:12.144732 kubelet[2727]: I0913 
10:35:12.144708 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:35:12.144772 kubelet[2727]: I0913 10:35:12.144730 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:35:12.145045 kubelet[2727]: I0913 10:35:12.144886 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:35:12.146762 kubelet[2727]: E0913 10:35:12.146266 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:35:12.214489 kubelet[2727]: I0913 10:35:12.214455 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.214489 kubelet[2727]: I0913 10:35:12.214477 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:12.214639 kubelet[2727]: I0913 10:35:12.214475 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:35:12.219140 kubelet[2727]: E0913 10:35:12.219115 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:12.219467 kubelet[2727]: E0913 10:35:12.219438 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.219467 kubelet[2727]: E0913 10:35:12.219452 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 10:35:12.251613 kubelet[2727]: I0913 10:35:12.251595 2727 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:35:12.256691 kubelet[2727]: I0913 10:35:12.256663 2727 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Sep 13 10:35:12.256747 kubelet[2727]: I0913 10:35:12.256732 2727 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:35:12.301353 kubelet[2727]: I0913 10:35:12.301317 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:12.301408 kubelet[2727]: I0913 10:35:12.301348 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:12.301408 kubelet[2727]: I0913 10:35:12.301372 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.301408 kubelet[2727]: I0913 10:35:12.301390 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.301482 kubelet[2727]: I0913 10:35:12.301468 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:35:12.301505 kubelet[2727]: I0913 10:35:12.301496 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cea0341579e4085306148c91c671d21-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cea0341579e4085306148c91c671d21\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:35:12.301530 kubelet[2727]: I0913 10:35:12.301515 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.301555 kubelet[2727]: I0913 10:35:12.301537 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.301584 kubelet[2727]: I0913 10:35:12.301556 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:35:12.520194 kubelet[2727]: E0913 10:35:12.520163 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:12.520358 kubelet[2727]: E0913 10:35:12.520206 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:12.520358 kubelet[2727]: E0913 10:35:12.520342 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:13.085101 kubelet[2727]: I0913 10:35:13.085072 2727 apiserver.go:52] "Watching apiserver"
Sep 13 10:35:13.100589 kubelet[2727]: I0913 10:35:13.100569 2727 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 10:35:13.126064 kubelet[2727]: I0913 10:35:13.125485 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:35:13.126064 kubelet[2727]: I0913 10:35:13.125605 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:35:13.126064 kubelet[2727]: I0913 10:35:13.125779 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:35:13.133041 kubelet[2727]: E0913 10:35:13.132881 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:35:13.133651 kubelet[2727]: E0913 10:35:13.133116 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:35:13.133651 kubelet[2727]: E0913 10:35:13.133261 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:13.133651 kubelet[2727]: E0913 10:35:13.133285 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:13.134169 kubelet[2727]: E0913 10:35:13.134079 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:35:13.134193 kubelet[2727]: E0913 10:35:13.134186 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:13.149663 kubelet[2727]: I0913 10:35:13.149603 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.149539265 podStartE2EDuration="3.149539265s" podCreationTimestamp="2025-09-13 10:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:13.149077792 +0000 UTC m=+1.126026568" watchObservedRunningTime="2025-09-13 10:35:13.149539265 +0000 UTC m=+1.126488031"
Sep 13 10:35:13.161046 kubelet[2727]: I0913 10:35:13.159439 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.15941362 podStartE2EDuration="3.15941362s" podCreationTimestamp="2025-09-13 10:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:13.159398406 +0000 UTC m=+1.136347182" watchObservedRunningTime="2025-09-13 10:35:13.15941362 +0000 UTC m=+1.136362386"
Sep 13 10:35:13.184602 kubelet[2727]: I0913 10:35:13.184511 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.184497947 podStartE2EDuration="3.184497947s" podCreationTimestamp="2025-09-13 10:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:13.170195537 +0000 UTC m=+1.147144313" watchObservedRunningTime="2025-09-13 10:35:13.184497947 +0000 UTC m=+1.161446723"
Sep 13 10:35:14.127383 kubelet[2727]: E0913 10:35:14.127297 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:14.127731 kubelet[2727]: E0913 10:35:14.127440 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:14.127731 kubelet[2727]: E0913 10:35:14.127596 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:15.128600 kubelet[2727]: E0913 10:35:15.128571 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:15.128951 kubelet[2727]: E0913 10:35:15.128714 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:16.129707 kubelet[2727]: E0913 10:35:16.129668 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:16.377334 kubelet[2727]: E0913 10:35:16.377302 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:16.546658 kubelet[2727]: I0913 10:35:16.546626 2727 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 10:35:16.546989 containerd[1557]: time="2025-09-13T10:35:16.546921188Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 10:35:16.547336 kubelet[2727]: I0913 10:35:16.547136 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 10:35:17.539463 systemd[1]: Created slice kubepods-besteffort-pode4b75980_dfcf_4221_89dd_fc9ba99599b9.slice - libcontainer container kubepods-besteffort-pode4b75980_dfcf_4221_89dd_fc9ba99599b9.slice.
Sep 13 10:35:17.634482 kubelet[2727]: I0913 10:35:17.634412 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4b75980-dfcf-4221-89dd-fc9ba99599b9-lib-modules\") pod \"kube-proxy-c9dzb\" (UID: \"e4b75980-dfcf-4221-89dd-fc9ba99599b9\") " pod="kube-system/kube-proxy-c9dzb"
Sep 13 10:35:17.634482 kubelet[2727]: I0913 10:35:17.634472 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mp6\" (UniqueName: \"kubernetes.io/projected/e4b75980-dfcf-4221-89dd-fc9ba99599b9-kube-api-access-57mp6\") pod \"kube-proxy-c9dzb\" (UID: \"e4b75980-dfcf-4221-89dd-fc9ba99599b9\") " pod="kube-system/kube-proxy-c9dzb"
Sep 13 10:35:17.634926 kubelet[2727]: I0913 10:35:17.634568 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4b75980-dfcf-4221-89dd-fc9ba99599b9-kube-proxy\") pod \"kube-proxy-c9dzb\" (UID: \"e4b75980-dfcf-4221-89dd-fc9ba99599b9\") " pod="kube-system/kube-proxy-c9dzb"
Sep 13 10:35:17.634926 kubelet[2727]: I0913 10:35:17.634602 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4b75980-dfcf-4221-89dd-fc9ba99599b9-xtables-lock\") pod \"kube-proxy-c9dzb\" (UID: \"e4b75980-dfcf-4221-89dd-fc9ba99599b9\") " pod="kube-system/kube-proxy-c9dzb"
Sep 13 10:35:17.806899 systemd[1]: Created slice kubepods-besteffort-pod04f7173a_d9c2_4992_b28c_d71590976064.slice - libcontainer container kubepods-besteffort-pod04f7173a_d9c2_4992_b28c_d71590976064.slice.
Sep 13 10:35:17.836443 kubelet[2727]: I0913 10:35:17.836403 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04f7173a-d9c2-4992-b28c-d71590976064-var-lib-calico\") pod \"tigera-operator-755d956888-89r6q\" (UID: \"04f7173a-d9c2-4992-b28c-d71590976064\") " pod="tigera-operator/tigera-operator-755d956888-89r6q"
Sep 13 10:35:17.836443 kubelet[2727]: I0913 10:35:17.836437 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhhg\" (UniqueName: \"kubernetes.io/projected/04f7173a-d9c2-4992-b28c-d71590976064-kube-api-access-mnhhg\") pod \"tigera-operator-755d956888-89r6q\" (UID: \"04f7173a-d9c2-4992-b28c-d71590976064\") " pod="tigera-operator/tigera-operator-755d956888-89r6q"
Sep 13 10:35:17.846658 kubelet[2727]: E0913 10:35:17.846610 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:17.847249 containerd[1557]: time="2025-09-13T10:35:17.847217136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9dzb,Uid:e4b75980-dfcf-4221-89dd-fc9ba99599b9,Namespace:kube-system,Attempt:0,}"
Sep 13 10:35:17.864540 containerd[1557]: time="2025-09-13T10:35:17.864494786Z" level=info msg="connecting to shim 7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998" address="unix:///run/containerd/s/143f899005c165aef8a550dd3830c4d3692b6c5964012cbee48252902f731081" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:35:17.897145 systemd[1]: Started cri-containerd-7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998.scope - libcontainer container 7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998.
Sep 13 10:35:17.920141 containerd[1557]: time="2025-09-13T10:35:17.920092041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9dzb,Uid:e4b75980-dfcf-4221-89dd-fc9ba99599b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998\""
Sep 13 10:35:17.920792 kubelet[2727]: E0913 10:35:17.920756 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:17.925918 containerd[1557]: time="2025-09-13T10:35:17.925849366Z" level=info msg="CreateContainer within sandbox \"7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 10:35:17.936131 containerd[1557]: time="2025-09-13T10:35:17.936086057Z" level=info msg="Container 6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:35:17.939669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822854751.mount: Deactivated successfully.
Sep 13 10:35:17.945203 containerd[1557]: time="2025-09-13T10:35:17.945161003Z" level=info msg="CreateContainer within sandbox \"7b3163a46659aed9e7c5fc0139a8ebb56ceeb18d891f514698461a9ac39da998\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3\""
Sep 13 10:35:17.945802 containerd[1557]: time="2025-09-13T10:35:17.945767136Z" level=info msg="StartContainer for \"6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3\""
Sep 13 10:35:17.947069 containerd[1557]: time="2025-09-13T10:35:17.947041640Z" level=info msg="connecting to shim 6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3" address="unix:///run/containerd/s/143f899005c165aef8a550dd3830c4d3692b6c5964012cbee48252902f731081" protocol=ttrpc version=3
Sep 13 10:35:17.976140 systemd[1]: Started cri-containerd-6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3.scope - libcontainer container 6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3.
Sep 13 10:35:18.021421 containerd[1557]: time="2025-09-13T10:35:18.021378677Z" level=info msg="StartContainer for \"6ecf78b999b3386f22e06080b711e856369173189aec3f605b9928e53157ddb3\" returns successfully"
Sep 13 10:35:18.110521 containerd[1557]: time="2025-09-13T10:35:18.110412759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-89r6q,Uid:04f7173a-d9c2-4992-b28c-d71590976064,Namespace:tigera-operator,Attempt:0,}"
Sep 13 10:35:18.127159 containerd[1557]: time="2025-09-13T10:35:18.127108077Z" level=info msg="connecting to shim 6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e" address="unix:///run/containerd/s/ae1ee3e866d2dc868577913a4895690f959b27d819d81cf3473f648e18393897" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:35:18.135721 kubelet[2727]: E0913 10:35:18.135450 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:18.147177 kubelet[2727]: I0913 10:35:18.147114 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9dzb" podStartSLOduration=1.147093937 podStartE2EDuration="1.147093937s" podCreationTimestamp="2025-09-13 10:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:18.144805272 +0000 UTC m=+6.121754048" watchObservedRunningTime="2025-09-13 10:35:18.147093937 +0000 UTC m=+6.124042713"
Sep 13 10:35:18.168343 systemd[1]: Started cri-containerd-6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e.scope - libcontainer container 6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e.
Sep 13 10:35:18.211385 containerd[1557]: time="2025-09-13T10:35:18.211337924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-89r6q,Uid:04f7173a-d9c2-4992-b28c-d71590976064,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e\""
Sep 13 10:35:18.212880 containerd[1557]: time="2025-09-13T10:35:18.212841015Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 10:35:22.027286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539878110.mount: Deactivated successfully.
Sep 13 10:35:22.350690 containerd[1557]: time="2025-09-13T10:35:22.350559364Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:35:22.351369 containerd[1557]: time="2025-09-13T10:35:22.351315890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 10:35:22.352488 containerd[1557]: time="2025-09-13T10:35:22.352445468Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:35:22.354593 containerd[1557]: time="2025-09-13T10:35:22.354561000Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:35:22.355179 containerd[1557]: time="2025-09-13T10:35:22.355145540Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.142267856s"
Sep 13 10:35:22.355179 containerd[1557]: time="2025-09-13T10:35:22.355175444Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 10:35:22.359000 containerd[1557]: time="2025-09-13T10:35:22.358967163Z" level=info msg="CreateContainer within sandbox \"6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 10:35:22.366845 containerd[1557]: time="2025-09-13T10:35:22.366806898Z" level=info msg="Container 8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:35:22.371606 containerd[1557]: time="2025-09-13T10:35:22.371569739Z" level=info msg="CreateContainer within sandbox \"6294426b399c76358acd8e3a57820e9ec1bd1f728b136e45ff67435335384d2e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872\""
Sep 13 10:35:22.372044 containerd[1557]: time="2025-09-13T10:35:22.371988739Z" level=info msg="StartContainer for \"8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872\""
Sep 13 10:35:22.372928 containerd[1557]: time="2025-09-13T10:35:22.372891035Z" level=info msg="connecting to shim 8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872" address="unix:///run/containerd/s/ae1ee3e866d2dc868577913a4895690f959b27d819d81cf3473f648e18393897" protocol=ttrpc version=3
Sep 13 10:35:22.424150 systemd[1]: Started cri-containerd-8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872.scope - libcontainer container 8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872.
Sep 13 10:35:22.451768 containerd[1557]: time="2025-09-13T10:35:22.451737369Z" level=info msg="StartContainer for \"8616682417e3e98dc7116bc81cafcb2126922939862306d8c14cceaea5d4a872\" returns successfully"
Sep 13 10:35:22.899127 kubelet[2727]: E0913 10:35:22.899093 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:23.145628 kubelet[2727]: E0913 10:35:23.145583 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:23.153206 kubelet[2727]: I0913 10:35:23.153099 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-89r6q" podStartSLOduration=2.009907316 podStartE2EDuration="6.153081728s" podCreationTimestamp="2025-09-13 10:35:17 +0000 UTC" firstStartedPulling="2025-09-13 10:35:18.212517949 +0000 UTC m=+6.189466725" lastFinishedPulling="2025-09-13 10:35:22.355692361 +0000 UTC m=+10.332641137" observedRunningTime="2025-09-13 10:35:23.15242931 +0000 UTC m=+11.129378086" watchObservedRunningTime="2025-09-13 10:35:23.153081728 +0000 UTC m=+11.130030504"
Sep 13 10:35:25.870539 update_engine[1544]: I20250913 10:35:25.870472 1544 update_attempter.cc:509] Updating boot flags...
Sep 13 10:35:26.016668 kubelet[2727]: E0913 10:35:26.016632 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:26.162516 kubelet[2727]: E0913 10:35:26.162414 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:26.381515 kubelet[2727]: E0913 10:35:26.381474 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:27.562753 sudo[1778]: pam_unix(sudo:session): session closed for user root
Sep 13 10:35:27.564977 sshd[1777]: Connection closed by 10.0.0.1 port 33844
Sep 13 10:35:27.566306 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Sep 13 10:35:27.572895 systemd[1]: sshd@6-10.0.0.4:22-10.0.0.1:33844.service: Deactivated successfully.
Sep 13 10:35:27.575859 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 10:35:27.578353 systemd[1]: session-7.scope: Consumed 5.660s CPU time, 230.1M memory peak.
Sep 13 10:35:27.579785 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit.
Sep 13 10:35:27.583744 systemd-logind[1543]: Removed session 7.
Sep 13 10:35:29.963951 systemd[1]: Created slice kubepods-besteffort-pod71516130_eaf1_42eb_bbcd_131b15a0e8f1.slice - libcontainer container kubepods-besteffort-pod71516130_eaf1_42eb_bbcd_131b15a0e8f1.slice.
Sep 13 10:35:30.013049 kubelet[2727]: I0913 10:35:30.012061 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71516130-eaf1-42eb-bbcd-131b15a0e8f1-typha-certs\") pod \"calico-typha-77f8f7b774-42jzs\" (UID: \"71516130-eaf1-42eb-bbcd-131b15a0e8f1\") " pod="calico-system/calico-typha-77f8f7b774-42jzs"
Sep 13 10:35:30.013049 kubelet[2727]: I0913 10:35:30.012107 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71516130-eaf1-42eb-bbcd-131b15a0e8f1-tigera-ca-bundle\") pod \"calico-typha-77f8f7b774-42jzs\" (UID: \"71516130-eaf1-42eb-bbcd-131b15a0e8f1\") " pod="calico-system/calico-typha-77f8f7b774-42jzs"
Sep 13 10:35:30.013049 kubelet[2727]: I0913 10:35:30.012129 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdx4k\" (UniqueName: \"kubernetes.io/projected/71516130-eaf1-42eb-bbcd-131b15a0e8f1-kube-api-access-cdx4k\") pod \"calico-typha-77f8f7b774-42jzs\" (UID: \"71516130-eaf1-42eb-bbcd-131b15a0e8f1\") " pod="calico-system/calico-typha-77f8f7b774-42jzs"
Sep 13 10:35:30.185261 systemd[1]: Created slice kubepods-besteffort-pod959b7101_fa43_457d_a7e8_f4932e921ae1.slice - libcontainer container kubepods-besteffort-pod959b7101_fa43_457d_a7e8_f4932e921ae1.slice.
Sep 13 10:35:30.213339 kubelet[2727]: I0913 10:35:30.213302 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-cni-net-dir\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213339 kubelet[2727]: I0913 10:35:30.213338 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-var-lib-calico\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213488 kubelet[2727]: I0913 10:35:30.213356 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwtx\" (UniqueName: \"kubernetes.io/projected/959b7101-fa43-457d-a7e8-f4932e921ae1-kube-api-access-zhwtx\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213488 kubelet[2727]: I0913 10:35:30.213375 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959b7101-fa43-457d-a7e8-f4932e921ae1-tigera-ca-bundle\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213488 kubelet[2727]: I0913 10:35:30.213390 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-policysync\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213488 kubelet[2727]: I0913 10:35:30.213407 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-flexvol-driver-host\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213488 kubelet[2727]: I0913 10:35:30.213422 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-cni-bin-dir\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213610 kubelet[2727]: I0913 10:35:30.213493 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-cni-log-dir\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213610 kubelet[2727]: I0913 10:35:30.213534 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-lib-modules\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213610 kubelet[2727]: I0913 10:35:30.213554 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/959b7101-fa43-457d-a7e8-f4932e921ae1-node-certs\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213610 kubelet[2727]: I0913 10:35:30.213570 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-var-run-calico\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.213610 kubelet[2727]: I0913 10:35:30.213586 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/959b7101-fa43-457d-a7e8-f4932e921ae1-xtables-lock\") pod \"calico-node-hn4cx\" (UID: \"959b7101-fa43-457d-a7e8-f4932e921ae1\") " pod="calico-system/calico-node-hn4cx"
Sep 13 10:35:30.268509 kubelet[2727]: E0913 10:35:30.268103 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:30.268593 containerd[1557]: time="2025-09-13T10:35:30.268514507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f8f7b774-42jzs,Uid:71516130-eaf1-42eb-bbcd-131b15a0e8f1,Namespace:calico-system,Attempt:0,}"
Sep 13 10:35:30.307798 containerd[1557]: time="2025-09-13T10:35:30.307549308Z" level=info msg="connecting to shim 94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd" address="unix:///run/containerd/s/73985b37f3e087c69916c1f0fcdb9a62e8687db4025acdefcb4e7f101b5551b3" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:35:30.321059 kubelet[2727]: E0913 10:35:30.320991 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.321059 kubelet[2727]: W0913 10:35:30.321011 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.321907 kubelet[2727]: E0913 10:35:30.321876 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.323685 kubelet[2727]: E0913 10:35:30.323645 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.323685 kubelet[2727]: W0913 10:35:30.323667 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.323855 kubelet[2727]: E0913 10:35:30.323691 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.325916 kubelet[2727]: E0913 10:35:30.325303 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.325916 kubelet[2727]: W0913 10:35:30.325318 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.325916 kubelet[2727]: E0913 10:35:30.325328 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.342164 systemd[1]: Started cri-containerd-94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd.scope - libcontainer container 94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd.
Sep 13 10:35:30.385740 containerd[1557]: time="2025-09-13T10:35:30.385684663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f8f7b774-42jzs,Uid:71516130-eaf1-42eb-bbcd-131b15a0e8f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd\""
Sep 13 10:35:30.386347 kubelet[2727]: E0913 10:35:30.386317 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:35:30.387241 containerd[1557]: time="2025-09-13T10:35:30.387207262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 10:35:30.478583 kubelet[2727]: E0913 10:35:30.478537 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38"
Sep 13 10:35:30.488004 containerd[1557]: time="2025-09-13T10:35:30.487970452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hn4cx,Uid:959b7101-fa43-457d-a7e8-f4932e921ae1,Namespace:calico-system,Attempt:0,}"
Sep 13 10:35:30.497365 kubelet[2727]: E0913 10:35:30.497341 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.497365 kubelet[2727]: W0913 10:35:30.497361 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.497462 kubelet[2727]: E0913 10:35:30.497381 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.497588 kubelet[2727]: E0913 10:35:30.497574 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.497588 kubelet[2727]: W0913 10:35:30.497584 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.497644 kubelet[2727]: E0913 10:35:30.497592 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.497757 kubelet[2727]: E0913 10:35:30.497743 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.497757 kubelet[2727]: W0913 10:35:30.497754 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.497800 kubelet[2727]: E0913 10:35:30.497762 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498037 kubelet[2727]: E0913 10:35:30.497997 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498037 kubelet[2727]: W0913 10:35:30.498008 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498037 kubelet[2727]: E0913 10:35:30.498016 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498236 kubelet[2727]: E0913 10:35:30.498213 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498236 kubelet[2727]: W0913 10:35:30.498221 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498236 kubelet[2727]: E0913 10:35:30.498229 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498397 kubelet[2727]: E0913 10:35:30.498381 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498397 kubelet[2727]: W0913 10:35:30.498390 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498397 kubelet[2727]: E0913 10:35:30.498397 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498574 kubelet[2727]: E0913 10:35:30.498558 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498574 kubelet[2727]: W0913 10:35:30.498568 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498635 kubelet[2727]: E0913 10:35:30.498578 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498756 kubelet[2727]: E0913 10:35:30.498741 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498756 kubelet[2727]: W0913 10:35:30.498751 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498816 kubelet[2727]: E0913 10:35:30.498759 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.498962 kubelet[2727]: E0913 10:35:30.498926 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.498962 kubelet[2727]: W0913 10:35:30.498936 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.498962 kubelet[2727]: E0913 10:35:30.498943 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.499282 kubelet[2727]: E0913 10:35:30.499233 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.499282 kubelet[2727]: W0913 10:35:30.499249 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.499282 kubelet[2727]: E0913 10:35:30.499272 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 10:35:30.499499 kubelet[2727]: E0913 10:35:30.499485 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 10:35:30.499499 kubelet[2727]: W0913 10:35:30.499495 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 10:35:30.499562 kubelet[2727]: E0913 10:35:30.499504 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 10:35:30.499688 kubelet[2727]: E0913 10:35:30.499669 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.499688 kubelet[2727]: W0913 10:35:30.499680 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.499688 kubelet[2727]: E0913 10:35:30.499687 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.499928 kubelet[2727]: E0913 10:35:30.499911 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.499928 kubelet[2727]: W0913 10:35:30.499922 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.500005 kubelet[2727]: E0913 10:35:30.499930 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.500213 kubelet[2727]: E0913 10:35:30.500188 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.500213 kubelet[2727]: W0913 10:35:30.500209 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.500302 kubelet[2727]: E0913 10:35:30.500236 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.500650 kubelet[2727]: E0913 10:35:30.500628 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.500650 kubelet[2727]: W0913 10:35:30.500644 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.500706 kubelet[2727]: E0913 10:35:30.500657 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.501366 kubelet[2727]: E0913 10:35:30.501342 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.501366 kubelet[2727]: W0913 10:35:30.501357 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.501366 kubelet[2727]: E0913 10:35:30.501366 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.502308 kubelet[2727]: E0913 10:35:30.502269 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.502308 kubelet[2727]: W0913 10:35:30.502303 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.502381 kubelet[2727]: E0913 10:35:30.502313 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.502522 kubelet[2727]: E0913 10:35:30.502504 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.502556 kubelet[2727]: W0913 10:35:30.502537 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.502556 kubelet[2727]: E0913 10:35:30.502547 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.502752 kubelet[2727]: E0913 10:35:30.502737 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.502752 kubelet[2727]: W0913 10:35:30.502748 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.502870 kubelet[2727]: E0913 10:35:30.502780 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.502974 kubelet[2727]: E0913 10:35:30.502958 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.502974 kubelet[2727]: W0913 10:35:30.502968 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.503352 kubelet[2727]: E0913 10:35:30.502977 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.516095 kubelet[2727]: E0913 10:35:30.516064 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.516095 kubelet[2727]: W0913 10:35:30.516088 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.516410 kubelet[2727]: E0913 10:35:30.516209 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.516410 kubelet[2727]: I0913 10:35:30.516247 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df2ad2ba-6007-4e6d-89d9-770ca47aef38-registration-dir\") pod \"csi-node-driver-wzp5l\" (UID: \"df2ad2ba-6007-4e6d-89d9-770ca47aef38\") " pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:30.517059 kubelet[2727]: E0913 10:35:30.517040 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.517059 kubelet[2727]: W0913 10:35:30.517057 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.517187 kubelet[2727]: E0913 10:35:30.517067 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.517335 kubelet[2727]: I0913 10:35:30.517293 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df2ad2ba-6007-4e6d-89d9-770ca47aef38-kubelet-dir\") pod \"csi-node-driver-wzp5l\" (UID: \"df2ad2ba-6007-4e6d-89d9-770ca47aef38\") " pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:30.517764 kubelet[2727]: E0913 10:35:30.517632 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.517764 kubelet[2727]: W0913 10:35:30.517692 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.517764 kubelet[2727]: E0913 10:35:30.517719 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.517985 kubelet[2727]: I0913 10:35:30.517963 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/df2ad2ba-6007-4e6d-89d9-770ca47aef38-socket-dir\") pod \"csi-node-driver-wzp5l\" (UID: \"df2ad2ba-6007-4e6d-89d9-770ca47aef38\") " pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:30.518385 kubelet[2727]: E0913 10:35:30.518368 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.518385 kubelet[2727]: W0913 10:35:30.518380 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.518385 kubelet[2727]: E0913 10:35:30.518391 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.518683 containerd[1557]: time="2025-09-13T10:35:30.518371036Z" level=info msg="connecting to shim c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d" address="unix:///run/containerd/s/dc03a554cd2c1f996964569656a54051410b0a9a5e534986322f549d5948959e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:30.518969 kubelet[2727]: E0913 10:35:30.518949 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.519304 kubelet[2727]: W0913 10:35:30.519104 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.519304 kubelet[2727]: E0913 10:35:30.519120 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.519757 kubelet[2727]: E0913 10:35:30.519684 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.519757 kubelet[2727]: W0913 10:35:30.519695 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.519757 kubelet[2727]: E0913 10:35:30.519705 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.520497 kubelet[2727]: E0913 10:35:30.520472 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.521105 kubelet[2727]: W0913 10:35:30.520974 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.521105 kubelet[2727]: E0913 10:35:30.520994 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.521357 kubelet[2727]: E0913 10:35:30.521236 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.521357 kubelet[2727]: W0913 10:35:30.521246 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.521357 kubelet[2727]: E0913 10:35:30.521256 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.521357 kubelet[2727]: I0913 10:35:30.521301 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4wn4\" (UniqueName: \"kubernetes.io/projected/df2ad2ba-6007-4e6d-89d9-770ca47aef38-kube-api-access-h4wn4\") pod \"csi-node-driver-wzp5l\" (UID: \"df2ad2ba-6007-4e6d-89d9-770ca47aef38\") " pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:30.521542 kubelet[2727]: E0913 10:35:30.521532 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.521651 kubelet[2727]: W0913 10:35:30.521591 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.521651 kubelet[2727]: E0913 10:35:30.521604 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.522949 kubelet[2727]: E0913 10:35:30.522826 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.522949 kubelet[2727]: W0913 10:35:30.522839 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.522949 kubelet[2727]: E0913 10:35:30.522848 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.523178 kubelet[2727]: E0913 10:35:30.523122 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.523178 kubelet[2727]: W0913 10:35:30.523147 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.523178 kubelet[2727]: E0913 10:35:30.523162 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.523607 kubelet[2727]: E0913 10:35:30.523589 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.523607 kubelet[2727]: W0913 10:35:30.523600 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.523607 kubelet[2727]: E0913 10:35:30.523609 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.523955 kubelet[2727]: E0913 10:35:30.523942 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.524013 kubelet[2727]: W0913 10:35:30.524003 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.524220 kubelet[2727]: E0913 10:35:30.524071 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.524220 kubelet[2727]: I0913 10:35:30.524098 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/df2ad2ba-6007-4e6d-89d9-770ca47aef38-varrun\") pod \"csi-node-driver-wzp5l\" (UID: \"df2ad2ba-6007-4e6d-89d9-770ca47aef38\") " pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:30.524353 kubelet[2727]: E0913 10:35:30.524341 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.524400 kubelet[2727]: W0913 10:35:30.524390 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.524450 kubelet[2727]: E0913 10:35:30.524440 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.524701 kubelet[2727]: E0913 10:35:30.524690 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.524755 kubelet[2727]: W0913 10:35:30.524745 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.524810 kubelet[2727]: E0913 10:35:30.524799 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.547172 systemd[1]: Started cri-containerd-c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d.scope - libcontainer container c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d. Sep 13 10:35:30.578046 containerd[1557]: time="2025-09-13T10:35:30.577971472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hn4cx,Uid:959b7101-fa43-457d-a7e8-f4932e921ae1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\"" Sep 13 10:35:30.625180 kubelet[2727]: E0913 10:35:30.625147 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.625180 kubelet[2727]: W0913 10:35:30.625168 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.625180 kubelet[2727]: E0913 10:35:30.625189 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.625505 kubelet[2727]: E0913 10:35:30.625476 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.625505 kubelet[2727]: W0913 10:35:30.625505 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.625566 kubelet[2727]: E0913 10:35:30.625532 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.625853 kubelet[2727]: E0913 10:35:30.625778 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.625853 kubelet[2727]: W0913 10:35:30.625792 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.625853 kubelet[2727]: E0913 10:35:30.625799 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.626252 kubelet[2727]: E0913 10:35:30.626215 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.626308 kubelet[2727]: W0913 10:35:30.626252 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.626308 kubelet[2727]: E0913 10:35:30.626279 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.626546 kubelet[2727]: E0913 10:35:30.626524 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.626546 kubelet[2727]: W0913 10:35:30.626536 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.626546 kubelet[2727]: E0913 10:35:30.626544 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.626753 kubelet[2727]: E0913 10:35:30.626729 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.626753 kubelet[2727]: W0913 10:35:30.626740 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.626753 kubelet[2727]: E0913 10:35:30.626748 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.627001 kubelet[2727]: E0913 10:35:30.626986 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627001 kubelet[2727]: W0913 10:35:30.626995 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627096 kubelet[2727]: E0913 10:35:30.627002 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.627208 kubelet[2727]: E0913 10:35:30.627193 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627208 kubelet[2727]: W0913 10:35:30.627202 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627253 kubelet[2727]: E0913 10:35:30.627212 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.627382 kubelet[2727]: E0913 10:35:30.627368 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627382 kubelet[2727]: W0913 10:35:30.627377 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627428 kubelet[2727]: E0913 10:35:30.627384 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.627562 kubelet[2727]: E0913 10:35:30.627548 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627562 kubelet[2727]: W0913 10:35:30.627557 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627614 kubelet[2727]: E0913 10:35:30.627565 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.627733 kubelet[2727]: E0913 10:35:30.627719 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627733 kubelet[2727]: W0913 10:35:30.627729 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627785 kubelet[2727]: E0913 10:35:30.627735 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.627894 kubelet[2727]: E0913 10:35:30.627881 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.627894 kubelet[2727]: W0913 10:35:30.627890 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.627938 kubelet[2727]: E0913 10:35:30.627897 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.628169 kubelet[2727]: E0913 10:35:30.628136 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.628200 kubelet[2727]: W0913 10:35:30.628167 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.628200 kubelet[2727]: E0913 10:35:30.628187 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.628392 kubelet[2727]: E0913 10:35:30.628376 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.628392 kubelet[2727]: W0913 10:35:30.628387 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.628439 kubelet[2727]: E0913 10:35:30.628395 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.628564 kubelet[2727]: E0913 10:35:30.628551 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.628564 kubelet[2727]: W0913 10:35:30.628560 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.628610 kubelet[2727]: E0913 10:35:30.628567 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.628739 kubelet[2727]: E0913 10:35:30.628726 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.628739 kubelet[2727]: W0913 10:35:30.628735 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.628783 kubelet[2727]: E0913 10:35:30.628742 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.628910 kubelet[2727]: E0913 10:35:30.628896 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.628910 kubelet[2727]: W0913 10:35:30.628906 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.628956 kubelet[2727]: E0913 10:35:30.628914 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.629110 kubelet[2727]: E0913 10:35:30.629097 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.629110 kubelet[2727]: W0913 10:35:30.629107 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.629166 kubelet[2727]: E0913 10:35:30.629114 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.629322 kubelet[2727]: E0913 10:35:30.629307 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.629322 kubelet[2727]: W0913 10:35:30.629317 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.629368 kubelet[2727]: E0913 10:35:30.629325 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.629503 kubelet[2727]: E0913 10:35:30.629489 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.629503 kubelet[2727]: W0913 10:35:30.629498 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.629543 kubelet[2727]: E0913 10:35:30.629505 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.629673 kubelet[2727]: E0913 10:35:30.629659 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.629673 kubelet[2727]: W0913 10:35:30.629668 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.629718 kubelet[2727]: E0913 10:35:30.629676 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.629876 kubelet[2727]: E0913 10:35:30.629862 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.629876 kubelet[2727]: W0913 10:35:30.629871 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.629916 kubelet[2727]: E0913 10:35:30.629879 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.630196 kubelet[2727]: E0913 10:35:30.630159 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.630196 kubelet[2727]: W0913 10:35:30.630174 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.630196 kubelet[2727]: E0913 10:35:30.630185 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.630444 kubelet[2727]: E0913 10:35:30.630428 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.630475 kubelet[2727]: W0913 10:35:30.630439 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.630475 kubelet[2727]: E0913 10:35:30.630459 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:30.630654 kubelet[2727]: E0913 10:35:30.630638 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.630654 kubelet[2727]: W0913 10:35:30.630652 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.630710 kubelet[2727]: E0913 10:35:30.630661 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:30.636934 kubelet[2727]: E0913 10:35:30.636909 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:30.636934 kubelet[2727]: W0913 10:35:30.636922 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:30.636934 kubelet[2727]: E0913 10:35:30.636932 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:31.937981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922103460.mount: Deactivated successfully. Sep 13 10:35:32.114392 kubelet[2727]: E0913 10:35:32.114289 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:32.613152 containerd[1557]: time="2025-09-13T10:35:32.613103568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:32.613849 containerd[1557]: time="2025-09-13T10:35:32.613820742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 10:35:32.615016 containerd[1557]: time="2025-09-13T10:35:32.614969881Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:32.616862 containerd[1557]: time="2025-09-13T10:35:32.616827556Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:32.617351 containerd[1557]: time="2025-09-13T10:35:32.617321458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.230082482s" Sep 13 10:35:32.617399 containerd[1557]: time="2025-09-13T10:35:32.617353673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 10:35:32.618160 containerd[1557]: time="2025-09-13T10:35:32.618127982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 10:35:32.631653 containerd[1557]: time="2025-09-13T10:35:32.631613927Z" level=info msg="CreateContainer within sandbox \"94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 10:35:32.639956 containerd[1557]: time="2025-09-13T10:35:32.639906295Z" level=info msg="Container a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:32.647116 containerd[1557]: time="2025-09-13T10:35:32.647075156Z" level=info msg="CreateContainer within sandbox \"94ea158dea790fb2476c756835adedea287facd21b76330ae7935bd03467fdfd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce\"" Sep 13 10:35:32.647690 containerd[1557]: time="2025-09-13T10:35:32.647504456Z" level=info msg="StartContainer for 
\"a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce\"" Sep 13 10:35:32.648468 containerd[1557]: time="2025-09-13T10:35:32.648444412Z" level=info msg="connecting to shim a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce" address="unix:///run/containerd/s/73985b37f3e087c69916c1f0fcdb9a62e8687db4025acdefcb4e7f101b5551b3" protocol=ttrpc version=3 Sep 13 10:35:32.669341 systemd[1]: Started cri-containerd-a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce.scope - libcontainer container a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce. Sep 13 10:35:32.716531 containerd[1557]: time="2025-09-13T10:35:32.716468160Z" level=info msg="StartContainer for \"a827f0aeaa64f431c2ee6278aa7e44d343591839feac84af2714c6de3991a7ce\" returns successfully" Sep 13 10:35:33.177949 kubelet[2727]: E0913 10:35:33.177642 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:33.186821 kubelet[2727]: I0913 10:35:33.186752 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77f8f7b774-42jzs" podStartSLOduration=1.955707017 podStartE2EDuration="4.186737104s" podCreationTimestamp="2025-09-13 10:35:29 +0000 UTC" firstStartedPulling="2025-09-13 10:35:30.386896988 +0000 UTC m=+18.363845764" lastFinishedPulling="2025-09-13 10:35:32.617927055 +0000 UTC m=+20.594875851" observedRunningTime="2025-09-13 10:35:33.186005115 +0000 UTC m=+21.162953891" watchObservedRunningTime="2025-09-13 10:35:33.186737104 +0000 UTC m=+21.163685870" Sep 13 10:35:33.221787 kubelet[2727]: E0913 10:35:33.221753 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.221787 kubelet[2727]: W0913 10:35:33.221773 2727 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.221787 kubelet[2727]: E0913 10:35:33.221796 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.222015 kubelet[2727]: E0913 10:35:33.221998 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.222015 kubelet[2727]: W0913 10:35:33.222007 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.222015 kubelet[2727]: E0913 10:35:33.222015 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.222290 kubelet[2727]: E0913 10:35:33.222261 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.222290 kubelet[2727]: W0913 10:35:33.222280 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.222339 kubelet[2727]: E0913 10:35:33.222299 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.222565 kubelet[2727]: E0913 10:35:33.222550 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.222565 kubelet[2727]: W0913 10:35:33.222560 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.222628 kubelet[2727]: E0913 10:35:33.222570 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.222841 kubelet[2727]: E0913 10:35:33.222808 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.222841 kubelet[2727]: W0913 10:35:33.222822 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.222841 kubelet[2727]: E0913 10:35:33.222831 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.223174 kubelet[2727]: E0913 10:35:33.223145 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.223174 kubelet[2727]: W0913 10:35:33.223156 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.223174 kubelet[2727]: E0913 10:35:33.223165 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.223363 kubelet[2727]: E0913 10:35:33.223337 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.223363 kubelet[2727]: W0913 10:35:33.223356 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.223409 kubelet[2727]: E0913 10:35:33.223363 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.223544 kubelet[2727]: E0913 10:35:33.223527 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.223544 kubelet[2727]: W0913 10:35:33.223538 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.223544 kubelet[2727]: E0913 10:35:33.223545 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.223721 kubelet[2727]: E0913 10:35:33.223704 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.223721 kubelet[2727]: W0913 10:35:33.223714 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.223721 kubelet[2727]: E0913 10:35:33.223721 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.223896 kubelet[2727]: E0913 10:35:33.223879 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.223896 kubelet[2727]: W0913 10:35:33.223888 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.223896 kubelet[2727]: E0913 10:35:33.223896 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.224076 kubelet[2727]: E0913 10:35:33.224059 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.224076 kubelet[2727]: W0913 10:35:33.224068 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.224076 kubelet[2727]: E0913 10:35:33.224075 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.224562 kubelet[2727]: E0913 10:35:33.224542 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.224562 kubelet[2727]: W0913 10:35:33.224555 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.224627 kubelet[2727]: E0913 10:35:33.224566 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.224787 kubelet[2727]: E0913 10:35:33.224762 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.224787 kubelet[2727]: W0913 10:35:33.224775 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.224787 kubelet[2727]: E0913 10:35:33.224783 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.224996 kubelet[2727]: E0913 10:35:33.224982 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.224996 kubelet[2727]: W0913 10:35:33.224991 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.225082 kubelet[2727]: E0913 10:35:33.224999 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.225190 kubelet[2727]: E0913 10:35:33.225174 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.225190 kubelet[2727]: W0913 10:35:33.225185 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.225238 kubelet[2727]: E0913 10:35:33.225192 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.248018 kubelet[2727]: E0913 10:35:33.248002 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.248018 kubelet[2727]: W0913 10:35:33.248014 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.248108 kubelet[2727]: E0913 10:35:33.248042 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.248252 kubelet[2727]: E0913 10:35:33.248238 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.248252 kubelet[2727]: W0913 10:35:33.248248 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.248310 kubelet[2727]: E0913 10:35:33.248256 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.248458 kubelet[2727]: E0913 10:35:33.248436 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.248458 kubelet[2727]: W0913 10:35:33.248449 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.248458 kubelet[2727]: E0913 10:35:33.248456 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.248671 kubelet[2727]: E0913 10:35:33.248656 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.248671 kubelet[2727]: W0913 10:35:33.248666 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.248719 kubelet[2727]: E0913 10:35:33.248676 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.248856 kubelet[2727]: E0913 10:35:33.248841 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.248856 kubelet[2727]: W0913 10:35:33.248850 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.248907 kubelet[2727]: E0913 10:35:33.248859 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.249044 kubelet[2727]: E0913 10:35:33.249010 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.249044 kubelet[2727]: W0913 10:35:33.249035 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.249044 kubelet[2727]: E0913 10:35:33.249042 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.249240 kubelet[2727]: E0913 10:35:33.249226 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.249240 kubelet[2727]: W0913 10:35:33.249236 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.249293 kubelet[2727]: E0913 10:35:33.249243 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.249590 kubelet[2727]: E0913 10:35:33.249568 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.249590 kubelet[2727]: W0913 10:35:33.249587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.249647 kubelet[2727]: E0913 10:35:33.249605 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.249804 kubelet[2727]: E0913 10:35:33.249789 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.249804 kubelet[2727]: W0913 10:35:33.249799 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.249856 kubelet[2727]: E0913 10:35:33.249807 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.249988 kubelet[2727]: E0913 10:35:33.249974 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.249988 kubelet[2727]: W0913 10:35:33.249983 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.250050 kubelet[2727]: E0913 10:35:33.249991 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.250175 kubelet[2727]: E0913 10:35:33.250160 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.250175 kubelet[2727]: W0913 10:35:33.250170 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.250215 kubelet[2727]: E0913 10:35:33.250178 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.250370 kubelet[2727]: E0913 10:35:33.250355 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.250370 kubelet[2727]: W0913 10:35:33.250365 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.250418 kubelet[2727]: E0913 10:35:33.250373 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.250593 kubelet[2727]: E0913 10:35:33.250578 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.250593 kubelet[2727]: W0913 10:35:33.250589 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.250645 kubelet[2727]: E0913 10:35:33.250597 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.250873 kubelet[2727]: E0913 10:35:33.250856 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.250873 kubelet[2727]: W0913 10:35:33.250868 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.250925 kubelet[2727]: E0913 10:35:33.250877 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.251080 kubelet[2727]: E0913 10:35:33.251064 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.251080 kubelet[2727]: W0913 10:35:33.251074 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.251129 kubelet[2727]: E0913 10:35:33.251082 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.251294 kubelet[2727]: E0913 10:35:33.251279 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.251294 kubelet[2727]: W0913 10:35:33.251289 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.251344 kubelet[2727]: E0913 10:35:33.251296 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:33.251659 kubelet[2727]: E0913 10:35:33.251636 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.251659 kubelet[2727]: W0913 10:35:33.251653 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.251739 kubelet[2727]: E0913 10:35:33.251674 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 10:35:33.251919 kubelet[2727]: E0913 10:35:33.251903 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 10:35:33.251919 kubelet[2727]: W0913 10:35:33.251914 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 10:35:33.251966 kubelet[2727]: E0913 10:35:33.251923 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 10:35:34.071095 containerd[1557]: time="2025-09-13T10:35:34.071043698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:34.071763 containerd[1557]: time="2025-09-13T10:35:34.071726474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 10:35:34.072833 containerd[1557]: time="2025-09-13T10:35:34.072779494Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:34.074647 containerd[1557]: time="2025-09-13T10:35:34.074610512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:34.075174 containerd[1557]: time="2025-09-13T10:35:34.075144408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.45698917s" Sep 13 10:35:34.075217 containerd[1557]: time="2025-09-13T10:35:34.075177253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 10:35:34.079473 containerd[1557]: time="2025-09-13T10:35:34.079440690Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 10:35:34.086462 containerd[1557]: time="2025-09-13T10:35:34.086440599Z" level=info msg="Container 34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:34.094822 containerd[1557]: time="2025-09-13T10:35:34.094786609Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\"" Sep 13 10:35:34.095220 containerd[1557]: time="2025-09-13T10:35:34.095169911Z" level=info msg="StartContainer for \"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\"" Sep 13 10:35:34.096410 containerd[1557]: time="2025-09-13T10:35:34.096383475Z" level=info msg="connecting to shim 34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb" address="unix:///run/containerd/s/dc03a554cd2c1f996964569656a54051410b0a9a5e534986322f549d5948959e" protocol=ttrpc version=3 Sep 13 10:35:34.113670 kubelet[2727]: E0913 10:35:34.113598 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:34.119157 systemd[1]: Started cri-containerd-34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb.scope - libcontainer container 34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb. 
Sep 13 10:35:34.159152 containerd[1557]: time="2025-09-13T10:35:34.159113698Z" level=info msg="StartContainer for \"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\" returns successfully" Sep 13 10:35:34.169013 systemd[1]: cri-containerd-34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb.scope: Deactivated successfully. Sep 13 10:35:34.171539 containerd[1557]: time="2025-09-13T10:35:34.171490577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\" id:\"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\" pid:3434 exited_at:{seconds:1757759734 nanos:171141995}" Sep 13 10:35:34.171649 containerd[1557]: time="2025-09-13T10:35:34.171545598Z" level=info msg="received exit event container_id:\"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\" id:\"34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb\" pid:3434 exited_at:{seconds:1757759734 nanos:171141995}" Sep 13 10:35:34.182131 kubelet[2727]: I0913 10:35:34.182096 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:35:34.182437 kubelet[2727]: E0913 10:35:34.182426 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:34.197442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34382c9f1b55b8a4c1c8480a07eb11d37a01e03c514bce8e1cf4601881d78aeb-rootfs.mount: Deactivated successfully. 
Sep 13 10:35:35.186135 containerd[1557]: time="2025-09-13T10:35:35.185900010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 10:35:36.114133 kubelet[2727]: E0913 10:35:36.114087 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:38.114209 kubelet[2727]: E0913 10:35:38.114155 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:39.951327 containerd[1557]: time="2025-09-13T10:35:39.951278247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:39.951938 containerd[1557]: time="2025-09-13T10:35:39.951893511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 10:35:39.952892 containerd[1557]: time="2025-09-13T10:35:39.952860964Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:39.954872 containerd[1557]: time="2025-09-13T10:35:39.954823537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:39.955388 containerd[1557]: time="2025-09-13T10:35:39.955356326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.769423259s" Sep 13 10:35:39.955388 containerd[1557]: time="2025-09-13T10:35:39.955382068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 10:35:39.959839 containerd[1557]: time="2025-09-13T10:35:39.959811505Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 10:35:39.968418 containerd[1557]: time="2025-09-13T10:35:39.968389040Z" level=info msg="Container 1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:39.978841 containerd[1557]: time="2025-09-13T10:35:39.978804220Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\"" Sep 13 10:35:39.979265 containerd[1557]: time="2025-09-13T10:35:39.979240998Z" level=info msg="StartContainer for \"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\"" Sep 13 10:35:39.980512 containerd[1557]: time="2025-09-13T10:35:39.980483799Z" level=info msg="connecting to shim 1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901" address="unix:///run/containerd/s/dc03a554cd2c1f996964569656a54051410b0a9a5e534986322f549d5948959e" protocol=ttrpc version=3 Sep 13 10:35:40.004170 systemd[1]: Started cri-containerd-1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901.scope - libcontainer container 
1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901. Sep 13 10:35:40.113930 kubelet[2727]: E0913 10:35:40.113873 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:40.200131 containerd[1557]: time="2025-09-13T10:35:40.200085823Z" level=info msg="StartContainer for \"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\" returns successfully" Sep 13 10:35:41.127949 containerd[1557]: time="2025-09-13T10:35:41.127900899Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:35:41.131412 systemd[1]: cri-containerd-1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901.scope: Deactivated successfully. Sep 13 10:35:41.131843 systemd[1]: cri-containerd-1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901.scope: Consumed 558ms CPU time, 179.2M memory peak, 2.5M read from disk, 171.3M written to disk. 
Sep 13 10:35:41.132868 containerd[1557]: time="2025-09-13T10:35:41.132825391Z" level=info msg="received exit event container_id:\"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\" id:\"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\" pid:3495 exited_at:{seconds:1757759741 nanos:132557089}" Sep 13 10:35:41.132919 containerd[1557]: time="2025-09-13T10:35:41.132900720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\" id:\"1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901\" pid:3495 exited_at:{seconds:1757759741 nanos:132557089}" Sep 13 10:35:41.154087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1020a860e4647170da163ef88605f901555c9cd332787bae7c031a892ae9b901-rootfs.mount: Deactivated successfully. Sep 13 10:35:41.185057 kubelet[2727]: I0913 10:35:41.184958 2727 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 10:35:41.435929 systemd[1]: Created slice kubepods-besteffort-podd0426ace_6336_45df_bbe4_a10ad650afc4.slice - libcontainer container kubepods-besteffort-podd0426ace_6336_45df_bbe4_a10ad650afc4.slice. Sep 13 10:35:41.445717 systemd[1]: Created slice kubepods-besteffort-pod5961d03f_3270_4e65_9942_db58dded5a9d.slice - libcontainer container kubepods-besteffort-pod5961d03f_3270_4e65_9942_db58dded5a9d.slice. Sep 13 10:35:41.452299 systemd[1]: Created slice kubepods-besteffort-pod5c3899f2_5c7f_42b4_adaf_74e62e5f01e1.slice - libcontainer container kubepods-besteffort-pod5c3899f2_5c7f_42b4_adaf_74e62e5f01e1.slice. Sep 13 10:35:41.459214 systemd[1]: Created slice kubepods-burstable-pod353e5176_02d6_4c42_90c4_a5ef2b75b9a6.slice - libcontainer container kubepods-burstable-pod353e5176_02d6_4c42_90c4_a5ef2b75b9a6.slice. 
Sep 13 10:35:41.466590 systemd[1]: Created slice kubepods-burstable-pod2afe3b9e_e70d_4349_825e_5e225a4b0400.slice - libcontainer container kubepods-burstable-pod2afe3b9e_e70d_4349_825e_5e225a4b0400.slice. Sep 13 10:35:41.472053 systemd[1]: Created slice kubepods-besteffort-podaba9d2ad_8ec3_48bc_8d22_a001517c5dd3.slice - libcontainer container kubepods-besteffort-podaba9d2ad_8ec3_48bc_8d22_a001517c5dd3.slice. Sep 13 10:35:41.479394 systemd[1]: Created slice kubepods-besteffort-pod8675c32c_23d1_40c6_87da_b0d97e289e16.slice - libcontainer container kubepods-besteffort-pod8675c32c_23d1_40c6_87da_b0d97e289e16.slice. Sep 13 10:35:41.511721 kubelet[2727]: I0913 10:35:41.511675 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj7hh\" (UniqueName: \"kubernetes.io/projected/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-kube-api-access-tj7hh\") pod \"whisker-74cf7c7c68-9p8zq\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " pod="calico-system/whisker-74cf7c7c68-9p8zq" Sep 13 10:35:41.512036 kubelet[2727]: I0913 10:35:41.511942 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5961d03f-3270-4e65-9942-db58dded5a9d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-gzrf5\" (UID: \"5961d03f-3270-4e65-9942-db58dded5a9d\") " pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.512036 kubelet[2727]: I0913 10:35:41.511967 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2afe3b9e-e70d-4349-825e-5e225a4b0400-config-volume\") pod \"coredns-674b8bbfcf-9rmkh\" (UID: \"2afe3b9e-e70d-4349-825e-5e225a4b0400\") " pod="kube-system/coredns-674b8bbfcf-9rmkh" Sep 13 10:35:41.512036 kubelet[2727]: I0913 10:35:41.511984 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/353e5176-02d6-4c42-90c4-a5ef2b75b9a6-config-volume\") pod \"coredns-674b8bbfcf-bxxk6\" (UID: \"353e5176-02d6-4c42-90c4-a5ef2b75b9a6\") " pod="kube-system/coredns-674b8bbfcf-bxxk6" Sep 13 10:35:41.512036 kubelet[2727]: I0913 10:35:41.511998 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpkr\" (UniqueName: \"kubernetes.io/projected/2afe3b9e-e70d-4349-825e-5e225a4b0400-kube-api-access-7kpkr\") pod \"coredns-674b8bbfcf-9rmkh\" (UID: \"2afe3b9e-e70d-4349-825e-5e225a4b0400\") " pod="kube-system/coredns-674b8bbfcf-9rmkh" Sep 13 10:35:41.512036 kubelet[2727]: I0913 10:35:41.512013 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qc2\" (UniqueName: \"kubernetes.io/projected/353e5176-02d6-4c42-90c4-a5ef2b75b9a6-kube-api-access-b4qc2\") pod \"coredns-674b8bbfcf-bxxk6\" (UID: \"353e5176-02d6-4c42-90c4-a5ef2b75b9a6\") " pod="kube-system/coredns-674b8bbfcf-bxxk6" Sep 13 10:35:41.512206 kubelet[2727]: I0913 10:35:41.512192 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961d03f-3270-4e65-9942-db58dded5a9d-config\") pod \"goldmane-54d579b49d-gzrf5\" (UID: \"5961d03f-3270-4e65-9942-db58dded5a9d\") " pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.512274 kubelet[2727]: I0913 10:35:41.512262 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5961d03f-3270-4e65-9942-db58dded5a9d-goldmane-key-pair\") pod \"goldmane-54d579b49d-gzrf5\" (UID: \"5961d03f-3270-4e65-9942-db58dded5a9d\") " pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.512349 kubelet[2727]: I0913 10:35:41.512337 2727 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-backend-key-pair\") pod \"whisker-74cf7c7c68-9p8zq\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " pod="calico-system/whisker-74cf7c7c68-9p8zq" Sep 13 10:35:41.512408 kubelet[2727]: I0913 10:35:41.512398 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-ca-bundle\") pod \"whisker-74cf7c7c68-9p8zq\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " pod="calico-system/whisker-74cf7c7c68-9p8zq" Sep 13 10:35:41.512462 kubelet[2727]: I0913 10:35:41.512452 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7l57\" (UniqueName: \"kubernetes.io/projected/5961d03f-3270-4e65-9942-db58dded5a9d-kube-api-access-k7l57\") pod \"goldmane-54d579b49d-gzrf5\" (UID: \"5961d03f-3270-4e65-9942-db58dded5a9d\") " pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.512524 kubelet[2727]: I0913 10:35:41.512511 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99zc\" (UniqueName: \"kubernetes.io/projected/8675c32c-23d1-40c6-87da-b0d97e289e16-kube-api-access-v99zc\") pod \"calico-kube-controllers-695f99646f-wq2kh\" (UID: \"8675c32c-23d1-40c6-87da-b0d97e289e16\") " pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" Sep 13 10:35:41.512600 kubelet[2727]: I0913 10:35:41.512584 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzxc\" (UniqueName: \"kubernetes.io/projected/5c3899f2-5c7f-42b4-adaf-74e62e5f01e1-kube-api-access-hrzxc\") pod \"calico-apiserver-655cf64d6d-wxtsq\" (UID: \"5c3899f2-5c7f-42b4-adaf-74e62e5f01e1\") " 
pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" Sep 13 10:35:41.512660 kubelet[2727]: I0913 10:35:41.512649 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8675c32c-23d1-40c6-87da-b0d97e289e16-tigera-ca-bundle\") pod \"calico-kube-controllers-695f99646f-wq2kh\" (UID: \"8675c32c-23d1-40c6-87da-b0d97e289e16\") " pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" Sep 13 10:35:41.512735 kubelet[2727]: I0913 10:35:41.512714 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0426ace-6336-45df-bbe4-a10ad650afc4-calico-apiserver-certs\") pod \"calico-apiserver-655cf64d6d-jl8hn\" (UID: \"d0426ace-6336-45df-bbe4-a10ad650afc4\") " pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" Sep 13 10:35:41.512800 kubelet[2727]: I0913 10:35:41.512787 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvp9\" (UniqueName: \"kubernetes.io/projected/d0426ace-6336-45df-bbe4-a10ad650afc4-kube-api-access-7pvp9\") pod \"calico-apiserver-655cf64d6d-jl8hn\" (UID: \"d0426ace-6336-45df-bbe4-a10ad650afc4\") " pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" Sep 13 10:35:41.512860 kubelet[2727]: I0913 10:35:41.512849 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c3899f2-5c7f-42b4-adaf-74e62e5f01e1-calico-apiserver-certs\") pod \"calico-apiserver-655cf64d6d-wxtsq\" (UID: \"5c3899f2-5c7f-42b4-adaf-74e62e5f01e1\") " pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" Sep 13 10:35:41.742605 containerd[1557]: time="2025-09-13T10:35:41.742512179Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-jl8hn,Uid:d0426ace-6336-45df-bbe4-a10ad650afc4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:35:41.750308 containerd[1557]: time="2025-09-13T10:35:41.750253678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gzrf5,Uid:5961d03f-3270-4e65-9942-db58dded5a9d,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:41.757286 containerd[1557]: time="2025-09-13T10:35:41.757256676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-wxtsq,Uid:5c3899f2-5c7f-42b4-adaf-74e62e5f01e1,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:35:41.763186 kubelet[2727]: E0913 10:35:41.763145 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:41.765508 containerd[1557]: time="2025-09-13T10:35:41.764480109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bxxk6,Uid:353e5176-02d6-4c42-90c4-a5ef2b75b9a6,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:41.769116 kubelet[2727]: E0913 10:35:41.769068 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:41.770004 containerd[1557]: time="2025-09-13T10:35:41.769966042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9rmkh,Uid:2afe3b9e-e70d-4349-825e-5e225a4b0400,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:41.775637 containerd[1557]: time="2025-09-13T10:35:41.775599858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cf7c7c68-9p8zq,Uid:aba9d2ad-8ec3-48bc-8d22-a001517c5dd3,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:41.785393 containerd[1557]: time="2025-09-13T10:35:41.785114209Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-695f99646f-wq2kh,Uid:8675c32c-23d1-40c6-87da-b0d97e289e16,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:41.843564 containerd[1557]: time="2025-09-13T10:35:41.843511328Z" level=error msg="Failed to destroy network for sandbox \"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.846485 containerd[1557]: time="2025-09-13T10:35:41.846429355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-jl8hn,Uid:d0426ace-6336-45df-bbe4-a10ad650afc4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.846728 kubelet[2727]: E0913 10:35:41.846670 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.846789 kubelet[2727]: E0913 10:35:41.846765 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" Sep 13 10:35:41.846817 kubelet[2727]: E0913 10:35:41.846789 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" Sep 13 10:35:41.846872 kubelet[2727]: E0913 10:35:41.846842 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655cf64d6d-jl8hn_calico-apiserver(d0426ace-6336-45df-bbe4-a10ad650afc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655cf64d6d-jl8hn_calico-apiserver(d0426ace-6336-45df-bbe4-a10ad650afc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35b87b729ae4f24aaacbb3c751f516e63a3659d8a408cceded15900631ceb2fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" podUID="d0426ace-6336-45df-bbe4-a10ad650afc4" Sep 13 10:35:41.854249 containerd[1557]: time="2025-09-13T10:35:41.854131057Z" level=error msg="Failed to destroy network for sandbox \"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.855516 containerd[1557]: time="2025-09-13T10:35:41.855467403Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9rmkh,Uid:2afe3b9e-e70d-4349-825e-5e225a4b0400,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.856043 kubelet[2727]: E0913 10:35:41.855703 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.856043 kubelet[2727]: E0913 10:35:41.855769 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9rmkh" Sep 13 10:35:41.856043 kubelet[2727]: E0913 10:35:41.855795 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9rmkh" Sep 13 10:35:41.856159 kubelet[2727]: E0913 10:35:41.855851 2727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9rmkh_kube-system(2afe3b9e-e70d-4349-825e-5e225a4b0400)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9rmkh_kube-system(2afe3b9e-e70d-4349-825e-5e225a4b0400)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"705629bf2cc9223dda3de08e842b101da527ce1b50d3ca321756806467cee8fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9rmkh" podUID="2afe3b9e-e70d-4349-825e-5e225a4b0400" Sep 13 10:35:41.871750 containerd[1557]: time="2025-09-13T10:35:41.871704125Z" level=error msg="Failed to destroy network for sandbox \"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.872037 containerd[1557]: time="2025-09-13T10:35:41.871998418Z" level=error msg="Failed to destroy network for sandbox \"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.873252 containerd[1557]: time="2025-09-13T10:35:41.873205218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695f99646f-wq2kh,Uid:8675c32c-23d1-40c6-87da-b0d97e289e16,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.873498 kubelet[2727]: E0913 10:35:41.873462 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.873576 kubelet[2727]: E0913 10:35:41.873519 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" Sep 13 10:35:41.873576 kubelet[2727]: E0913 10:35:41.873544 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" Sep 13 10:35:41.873647 kubelet[2727]: E0913 10:35:41.873588 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-695f99646f-wq2kh_calico-system(8675c32c-23d1-40c6-87da-b0d97e289e16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-695f99646f-wq2kh_calico-system(8675c32c-23d1-40c6-87da-b0d97e289e16)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"d635941573e090d65c9da25f3ad51aafe2443a213ae28cee79beb36f3949b665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" podUID="8675c32c-23d1-40c6-87da-b0d97e289e16" Sep 13 10:35:41.874304 containerd[1557]: time="2025-09-13T10:35:41.874274826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cf7c7c68-9p8zq,Uid:aba9d2ad-8ec3-48bc-8d22-a001517c5dd3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.874736 kubelet[2727]: E0913 10:35:41.874621 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.874736 kubelet[2727]: E0913 10:35:41.874696 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74cf7c7c68-9p8zq" Sep 13 10:35:41.874736 kubelet[2727]: E0913 10:35:41.874712 2727 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74cf7c7c68-9p8zq" Sep 13 10:35:41.874893 kubelet[2727]: E0913 10:35:41.874874 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74cf7c7c68-9p8zq_calico-system(aba9d2ad-8ec3-48bc-8d22-a001517c5dd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74cf7c7c68-9p8zq_calico-system(aba9d2ad-8ec3-48bc-8d22-a001517c5dd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f355fe13920d1940cf61f66f85881b76610f5cee86573d9679c200a799ed3c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74cf7c7c68-9p8zq" podUID="aba9d2ad-8ec3-48bc-8d22-a001517c5dd3" Sep 13 10:35:41.882258 containerd[1557]: time="2025-09-13T10:35:41.882171112Z" level=error msg="Failed to destroy network for sandbox \"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.883757 containerd[1557]: time="2025-09-13T10:35:41.883709358Z" level=error msg="Failed to destroy network for sandbox \"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
10:35:41.884040 containerd[1557]: time="2025-09-13T10:35:41.883946738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-wxtsq,Uid:5c3899f2-5c7f-42b4-adaf-74e62e5f01e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.884253 kubelet[2727]: E0913 10:35:41.884196 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.884320 kubelet[2727]: E0913 10:35:41.884261 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" Sep 13 10:35:41.884320 kubelet[2727]: E0913 10:35:41.884292 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" Sep 13 10:35:41.884412 kubelet[2727]: E0913 10:35:41.884374 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655cf64d6d-wxtsq_calico-apiserver(5c3899f2-5c7f-42b4-adaf-74e62e5f01e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655cf64d6d-wxtsq_calico-apiserver(5c3899f2-5c7f-42b4-adaf-74e62e5f01e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3612f16b81c2c2117ea3ad94b803945c7ab3a88b988fad47b872aff9effc3024\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" podUID="5c3899f2-5c7f-42b4-adaf-74e62e5f01e1" Sep 13 10:35:41.885113 containerd[1557]: time="2025-09-13T10:35:41.885005314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gzrf5,Uid:5961d03f-3270-4e65-9942-db58dded5a9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.885453 containerd[1557]: time="2025-09-13T10:35:41.885322723Z" level=error msg="Failed to destroy network for sandbox \"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.885498 kubelet[2727]: E0913 10:35:41.885334 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.885498 kubelet[2727]: E0913 10:35:41.885364 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.885498 kubelet[2727]: E0913 10:35:41.885377 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-gzrf5" Sep 13 10:35:41.885592 kubelet[2727]: E0913 10:35:41.885423 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-gzrf5_calico-system(5961d03f-3270-4e65-9942-db58dded5a9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-gzrf5_calico-system(5961d03f-3270-4e65-9942-db58dded5a9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37ec53be7bd7acf78796e0cbe9bb331599a6497172639917e664e06571bbb634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-gzrf5" podUID="5961d03f-3270-4e65-9942-db58dded5a9d" Sep 13 10:35:41.886703 containerd[1557]: time="2025-09-13T10:35:41.886645072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bxxk6,Uid:353e5176-02d6-4c42-90c4-a5ef2b75b9a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.886953 kubelet[2727]: E0913 10:35:41.886916 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:41.886992 kubelet[2727]: E0913 10:35:41.886963 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bxxk6" Sep 13 10:35:41.886992 kubelet[2727]: E0913 10:35:41.886982 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bxxk6" Sep 13 10:35:41.887142 kubelet[2727]: E0913 10:35:41.887111 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bxxk6_kube-system(353e5176-02d6-4c42-90c4-a5ef2b75b9a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bxxk6_kube-system(353e5176-02d6-4c42-90c4-a5ef2b75b9a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232df6ec807cf2e4db8a325bdcbeae072b75684edaef34e71cbec52dfdc7274f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bxxk6" podUID="353e5176-02d6-4c42-90c4-a5ef2b75b9a6" Sep 13 10:35:42.119604 systemd[1]: Created slice kubepods-besteffort-poddf2ad2ba_6007_4e6d_89d9_770ca47aef38.slice - libcontainer container kubepods-besteffort-poddf2ad2ba_6007_4e6d_89d9_770ca47aef38.slice. Sep 13 10:35:42.122055 containerd[1557]: time="2025-09-13T10:35:42.122003640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzp5l,Uid:df2ad2ba-6007-4e6d-89d9-770ca47aef38,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:42.154694 systemd[1]: run-netns-cni\x2de20330f5\x2d398a\x2dd7d1\x2d0757\x2d1621e9bb2d98.mount: Deactivated successfully. 
Sep 13 10:35:42.167996 containerd[1557]: time="2025-09-13T10:35:42.167947778Z" level=error msg="Failed to destroy network for sandbox \"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:42.170136 containerd[1557]: time="2025-09-13T10:35:42.170102788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzp5l,Uid:df2ad2ba-6007-4e6d-89d9-770ca47aef38,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:42.170245 systemd[1]: run-netns-cni\x2d6a4e91af\x2d1f8f\x2d9c38\x2d9c56\x2df64f921ab4e6.mount: Deactivated successfully. 
Sep 13 10:35:42.170361 kubelet[2727]: E0913 10:35:42.170289 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 10:35:42.170361 kubelet[2727]: E0913 10:35:42.170353 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:42.170436 kubelet[2727]: E0913 10:35:42.170374 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzp5l" Sep 13 10:35:42.170467 kubelet[2727]: E0913 10:35:42.170426 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzp5l_calico-system(df2ad2ba-6007-4e6d-89d9-770ca47aef38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzp5l_calico-system(df2ad2ba-6007-4e6d-89d9-770ca47aef38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08d0e3518b53a7a8d90bda496dbbf66690fb2e656658fc0833b9ad3c763d296e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzp5l" podUID="df2ad2ba-6007-4e6d-89d9-770ca47aef38" Sep 13 10:35:42.210798 containerd[1557]: time="2025-09-13T10:35:42.210764857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 10:35:48.497533 kubelet[2727]: I0913 10:35:48.497493 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:35:48.499150 kubelet[2727]: E0913 10:35:48.497820 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:49.220843 kubelet[2727]: E0913 10:35:49.220799 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:50.359809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324264011.mount: Deactivated successfully. 
Sep 13 10:35:51.769415 containerd[1557]: time="2025-09-13T10:35:51.769351552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:51.770102 containerd[1557]: time="2025-09-13T10:35:51.770079724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 10:35:51.771402 containerd[1557]: time="2025-09-13T10:35:51.771375222Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:51.775544 containerd[1557]: time="2025-09-13T10:35:51.775477991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:51.776103 containerd[1557]: time="2025-09-13T10:35:51.776058695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.565256253s" Sep 13 10:35:51.776103 containerd[1557]: time="2025-09-13T10:35:51.776098092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 10:35:51.788730 containerd[1557]: time="2025-09-13T10:35:51.788684470Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 10:35:51.805064 containerd[1557]: time="2025-09-13T10:35:51.804373675Z" level=info msg="Container 
7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:51.818676 containerd[1557]: time="2025-09-13T10:35:51.818618241Z" level=info msg="CreateContainer within sandbox \"c7d228725b864a287cb220bb33e56bd228d3111fa1f3cef6c6b7ba47fea3ba0d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\"" Sep 13 10:35:51.820231 containerd[1557]: time="2025-09-13T10:35:51.820190891Z" level=info msg="StartContainer for \"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\"" Sep 13 10:35:51.829702 containerd[1557]: time="2025-09-13T10:35:51.829659544Z" level=info msg="connecting to shim 7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed" address="unix:///run/containerd/s/dc03a554cd2c1f996964569656a54051410b0a9a5e534986322f549d5948959e" protocol=ttrpc version=3 Sep 13 10:35:51.851150 systemd[1]: Started cri-containerd-7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed.scope - libcontainer container 7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed. Sep 13 10:35:51.892415 containerd[1557]: time="2025-09-13T10:35:51.892365386Z" level=info msg="StartContainer for \"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" returns successfully" Sep 13 10:35:51.943112 systemd[1]: Started sshd@7-10.0.0.4:22-10.0.0.1:43432.service - OpenSSH per-connection server daemon (10.0.0.1:43432). Sep 13 10:35:51.978943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 10:35:51.979262 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 10:35:52.005051 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 43432 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:35:52.006419 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:35:52.012264 systemd-logind[1543]: New session 8 of user core. Sep 13 10:35:52.020674 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 10:35:52.121134 kubelet[2727]: E0913 10:35:52.121088 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:52.122786 containerd[1557]: time="2025-09-13T10:35:52.122720411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bxxk6,Uid:353e5176-02d6-4c42-90c4-a5ef2b75b9a6,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:52.176108 kubelet[2727]: I0913 10:35:52.176058 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-backend-key-pair\") pod \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " Sep 13 10:35:52.176108 kubelet[2727]: I0913 10:35:52.176106 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-ca-bundle\") pod \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " Sep 13 10:35:52.176194 kubelet[2727]: I0913 10:35:52.176137 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj7hh\" (UniqueName: \"kubernetes.io/projected/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-kube-api-access-tj7hh\") pod \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\" (UID: \"aba9d2ad-8ec3-48bc-8d22-a001517c5dd3\") " Sep 13 
10:35:52.176714 kubelet[2727]: I0913 10:35:52.176665 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3" (UID: "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:35:52.493047 kubelet[2727]: I0913 10:35:52.491148 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 10:35:52.494966 kubelet[2727]: I0913 10:35:52.494889 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-kube-api-access-tj7hh" (OuterVolumeSpecName: "kube-api-access-tj7hh") pod "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3" (UID: "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3"). InnerVolumeSpecName "kube-api-access-tj7hh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:35:52.498364 kubelet[2727]: I0913 10:35:52.497414 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3" (UID: "aba9d2ad-8ec3-48bc-8d22-a001517c5dd3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 10:35:52.550562 kubelet[2727]: I0913 10:35:52.550448 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hn4cx" podStartSLOduration=1.352217541 podStartE2EDuration="22.550430932s" podCreationTimestamp="2025-09-13 10:35:30 +0000 UTC" firstStartedPulling="2025-09-13 10:35:30.579170931 +0000 UTC m=+18.556119707" lastFinishedPulling="2025-09-13 10:35:51.777384322 +0000 UTC m=+39.754333098" observedRunningTime="2025-09-13 10:35:52.549233697 +0000 UTC m=+40.526182473" watchObservedRunningTime="2025-09-13 10:35:52.550430932 +0000 UTC m=+40.527379708" Sep 13 10:35:52.564506 sshd[3865]: Connection closed by 10.0.0.1 port 43432 Sep 13 10:35:52.566224 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Sep 13 10:35:52.571107 systemd[1]: sshd@7-10.0.0.4:22-10.0.0.1:43432.service: Deactivated successfully. Sep 13 10:35:52.575150 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 10:35:52.578777 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Sep 13 10:35:52.580497 systemd-logind[1543]: Removed session 8. 
Sep 13 10:35:52.592194 kubelet[2727]: I0913 10:35:52.592154 2727 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tj7hh\" (UniqueName: \"kubernetes.io/projected/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-kube-api-access-tj7hh\") on node \"localhost\" DevicePath \"\"" Sep 13 10:35:52.592194 kubelet[2727]: I0913 10:35:52.592184 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 10:35:52.691792 containerd[1557]: time="2025-09-13T10:35:52.691751024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" id:\"57d0dbd7cfb18f09e959bd187fbc2e7af4ab04d7cbf139277fb217ce896f3eef\" pid:3943 exit_status:1 exited_at:{seconds:1757759752 nanos:691453263}" Sep 13 10:35:52.711536 systemd-networkd[1482]: cali2640655a1d4: Link UP Sep 13 10:35:52.711741 systemd-networkd[1482]: cali2640655a1d4: Gained carrier Sep 13 10:35:52.724891 containerd[1557]: 2025-09-13 10:35:52.581 [INFO][3891] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:35:52.724891 containerd[1557]: 2025-09-13 10:35:52.599 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0 coredns-674b8bbfcf- kube-system 353e5176-02d6-4c42-90c4-a5ef2b75b9a6 882 0 2025-09-13 10:35:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bxxk6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2640655a1d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-" Sep 13 10:35:52.724891 containerd[1557]: 2025-09-13 10:35:52.599 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.724891 containerd[1557]: 2025-09-13 10:35:52.667 [INFO][3924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" HandleID="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Workload="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.668 [INFO][3924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" HandleID="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Workload="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005169f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bxxk6", "timestamp":"2025-09-13 10:35:52.667861709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.668 [INFO][3924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.668 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.668 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.677 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" host="localhost" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.683 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.687 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.689 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.691 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:52.725364 containerd[1557]: 2025-09-13 10:35:52.691 [INFO][3924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" host="localhost" Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.692 [INFO][3924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.696 [INFO][3924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" host="localhost" Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.700 [INFO][3924] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" host="localhost" Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.700 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" host="localhost" Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.700 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:52.726372 containerd[1557]: 2025-09-13 10:35:52.701 [INFO][3924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" HandleID="k8s-pod-network.3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Workload="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.726492 containerd[1557]: 2025-09-13 10:35:52.704 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"353e5176-02d6-4c42-90c4-a5ef2b75b9a6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bxxk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2640655a1d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:52.726580 containerd[1557]: 2025-09-13 10:35:52.704 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.726580 containerd[1557]: 2025-09-13 10:35:52.704 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2640655a1d4 ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.726580 containerd[1557]: 2025-09-13 10:35:52.712 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.726650 containerd[1557]: 2025-09-13 10:35:52.712 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"353e5176-02d6-4c42-90c4-a5ef2b75b9a6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f", Pod:"coredns-674b8bbfcf-bxxk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2640655a1d4", MAC:"4a:75:ab:0f:fd:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:52.726650 containerd[1557]: 2025-09-13 10:35:52.720 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" Namespace="kube-system" Pod="coredns-674b8bbfcf-bxxk6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bxxk6-eth0" Sep 13 10:35:52.783611 systemd[1]: var-lib-kubelet-pods-aba9d2ad\x2d8ec3\x2d48bc\x2d8d22\x2da001517c5dd3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtj7hh.mount: Deactivated successfully. Sep 13 10:35:52.783729 systemd[1]: var-lib-kubelet-pods-aba9d2ad\x2d8ec3\x2d48bc\x2d8d22\x2da001517c5dd3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 10:35:52.795245 systemd[1]: Removed slice kubepods-besteffort-podaba9d2ad_8ec3_48bc_8d22_a001517c5dd3.slice - libcontainer container kubepods-besteffort-podaba9d2ad_8ec3_48bc_8d22_a001517c5dd3.slice. Sep 13 10:35:52.847294 systemd[1]: Created slice kubepods-besteffort-podb4ac2287_8ba9_46ff_b4ea_27f414aedbcc.slice - libcontainer container kubepods-besteffort-podb4ac2287_8ba9_46ff_b4ea_27f414aedbcc.slice. 
Sep 13 10:35:52.854318 containerd[1557]: time="2025-09-13T10:35:52.854281157Z" level=info msg="connecting to shim 3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f" address="unix:///run/containerd/s/39ec3483414cef97b898c75de309644cfa2b275b4d1ec8bc3b7d269f48dd0de9" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:52.894130 kubelet[2727]: I0913 10:35:52.894092 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4ac2287-8ba9-46ff-b4ea-27f414aedbcc-whisker-ca-bundle\") pod \"whisker-747f4c9d5f-7885l\" (UID: \"b4ac2287-8ba9-46ff-b4ea-27f414aedbcc\") " pod="calico-system/whisker-747f4c9d5f-7885l" Sep 13 10:35:52.894206 kubelet[2727]: I0913 10:35:52.894134 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4ac2287-8ba9-46ff-b4ea-27f414aedbcc-whisker-backend-key-pair\") pod \"whisker-747f4c9d5f-7885l\" (UID: \"b4ac2287-8ba9-46ff-b4ea-27f414aedbcc\") " pod="calico-system/whisker-747f4c9d5f-7885l" Sep 13 10:35:52.894206 kubelet[2727]: I0913 10:35:52.894156 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlb2\" (UniqueName: \"kubernetes.io/projected/b4ac2287-8ba9-46ff-b4ea-27f414aedbcc-kube-api-access-nnlb2\") pod \"whisker-747f4c9d5f-7885l\" (UID: \"b4ac2287-8ba9-46ff-b4ea-27f414aedbcc\") " pod="calico-system/whisker-747f4c9d5f-7885l" Sep 13 10:35:52.900446 systemd[1]: Started cri-containerd-3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f.scope - libcontainer container 3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f. 
Sep 13 10:35:52.911896 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:52.944005 containerd[1557]: time="2025-09-13T10:35:52.943949466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bxxk6,Uid:353e5176-02d6-4c42-90c4-a5ef2b75b9a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f\"" Sep 13 10:35:52.947504 kubelet[2727]: E0913 10:35:52.947467 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:52.951846 containerd[1557]: time="2025-09-13T10:35:52.951804373Z" level=info msg="CreateContainer within sandbox \"3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:35:52.961921 containerd[1557]: time="2025-09-13T10:35:52.961880442Z" level=info msg="Container e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:52.967813 containerd[1557]: time="2025-09-13T10:35:52.967779125Z" level=info msg="CreateContainer within sandbox \"3ca74c45c432b59e54ca8a6691c759b96e96ff38b88266b922f9724b3933868f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e\"" Sep 13 10:35:52.968246 containerd[1557]: time="2025-09-13T10:35:52.968219414Z" level=info msg="StartContainer for \"e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e\"" Sep 13 10:35:52.968955 containerd[1557]: time="2025-09-13T10:35:52.968932845Z" level=info msg="connecting to shim e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e" address="unix:///run/containerd/s/39ec3483414cef97b898c75de309644cfa2b275b4d1ec8bc3b7d269f48dd0de9" protocol=ttrpc version=3 
Sep 13 10:35:52.997164 systemd[1]: Started cri-containerd-e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e.scope - libcontainer container e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e. Sep 13 10:35:53.028206 containerd[1557]: time="2025-09-13T10:35:53.028160493Z" level=info msg="StartContainer for \"e89eb0f2a38ecde18b17524d0e728f132dfe86ae82e12a2346e6326929a6e96e\" returns successfully" Sep 13 10:35:53.114524 kubelet[2727]: E0913 10:35:53.114414 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:53.114809 containerd[1557]: time="2025-09-13T10:35:53.114778284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9rmkh,Uid:2afe3b9e-e70d-4349-825e-5e225a4b0400,Namespace:kube-system,Attempt:0,}" Sep 13 10:35:53.153740 containerd[1557]: time="2025-09-13T10:35:53.153510234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747f4c9d5f-7885l,Uid:b4ac2287-8ba9-46ff-b4ea-27f414aedbcc,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:53.207478 systemd-networkd[1482]: cali96313b85785: Link UP Sep 13 10:35:53.207663 systemd-networkd[1482]: cali96313b85785: Gained carrier Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.136 [INFO][4046] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.145 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0 coredns-674b8bbfcf- kube-system 2afe3b9e-e70d-4349-825e-5e225a4b0400 884 0 2025-09-13 10:35:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9rmkh eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96313b85785 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.145 [INFO][4046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.171 [INFO][4061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" HandleID="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Workload="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.171 [INFO][4061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" HandleID="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Workload="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9rmkh", "timestamp":"2025-09-13 10:35:53.171558316 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.171 [INFO][4061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.171 [INFO][4061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.171 [INFO][4061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.177 [INFO][4061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.181 [INFO][4061] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.186 [INFO][4061] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.188 [INFO][4061] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.190 [INFO][4061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.190 [INFO][4061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.191 [INFO][4061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.194 [INFO][4061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.199 [INFO][4061] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.199 [INFO][4061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" host="localhost" Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.199 [INFO][4061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:53.218802 containerd[1557]: 2025-09-13 10:35:53.199 [INFO][4061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" HandleID="k8s-pod-network.1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Workload="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.203 [INFO][4046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2afe3b9e-e70d-4349-825e-5e225a4b0400", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9rmkh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96313b85785", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.203 [INFO][4046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.203 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96313b85785 ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.207 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.207 [INFO][4046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2afe3b9e-e70d-4349-825e-5e225a4b0400", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e", Pod:"coredns-674b8bbfcf-9rmkh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96313b85785", MAC:"ca:09:ef:f5:43:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:53.219435 containerd[1557]: 2025-09-13 10:35:53.214 [INFO][4046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" Namespace="kube-system" Pod="coredns-674b8bbfcf-9rmkh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9rmkh-eth0" Sep 13 10:35:53.232653 kubelet[2727]: E0913 10:35:53.232611 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:53.244795 kubelet[2727]: I0913 10:35:53.244739 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bxxk6" podStartSLOduration=36.244721475 podStartE2EDuration="36.244721475s" podCreationTimestamp="2025-09-13 10:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:53.244095365 +0000 UTC m=+41.221044141" watchObservedRunningTime="2025-09-13 10:35:53.244721475 +0000 UTC m=+41.221670241" Sep 13 10:35:53.257813 containerd[1557]: time="2025-09-13T10:35:53.257771429Z" level=info msg="connecting to shim 1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e" address="unix:///run/containerd/s/4db4efa93ca84535a51383e0b59daf9103f764a297c7a79a99818af9e9cec7d5" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:53.282273 systemd[1]: Started 
cri-containerd-1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e.scope - libcontainer container 1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e. Sep 13 10:35:53.296679 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:53.313227 systemd-networkd[1482]: calica306f393b2: Link UP Sep 13 10:35:53.313415 systemd-networkd[1482]: calica306f393b2: Gained carrier Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.178 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.191 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--747f4c9d5f--7885l-eth0 whisker-747f4c9d5f- calico-system b4ac2287-8ba9-46ff-b4ea-27f414aedbcc 1013 0 2025-09-13 10:35:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:747f4c9d5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-747f4c9d5f-7885l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica306f393b2 [] [] }} ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.191 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.218 [INFO][4084] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" HandleID="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Workload="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.218 [INFO][4084] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" HandleID="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Workload="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-747f4c9d5f-7885l", "timestamp":"2025-09-13 10:35:53.218523525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.218 [INFO][4084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.218 [INFO][4084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.218 [INFO][4084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.279 [INFO][4084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.285 [INFO][4084] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.289 [INFO][4084] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.290 [INFO][4084] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.294 [INFO][4084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.294 [INFO][4084] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.295 [INFO][4084] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.301 [INFO][4084] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.305 [INFO][4084] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.305 [INFO][4084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" host="localhost" Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.305 [INFO][4084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:53.335054 containerd[1557]: 2025-09-13 10:35:53.305 [INFO][4084] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" HandleID="k8s-pod-network.5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Workload="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.310 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--747f4c9d5f--7885l-eth0", GenerateName:"whisker-747f4c9d5f-", Namespace:"calico-system", SelfLink:"", UID:"b4ac2287-8ba9-46ff-b4ea-27f414aedbcc", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"747f4c9d5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-747f4c9d5f-7885l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica306f393b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.310 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.310 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica306f393b2 ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.312 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.313 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" 
WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--747f4c9d5f--7885l-eth0", GenerateName:"whisker-747f4c9d5f-", Namespace:"calico-system", SelfLink:"", UID:"b4ac2287-8ba9-46ff-b4ea-27f414aedbcc", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"747f4c9d5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c", Pod:"whisker-747f4c9d5f-7885l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica306f393b2", MAC:"f6:df:13:40:25:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:53.336647 containerd[1557]: 2025-09-13 10:35:53.327 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" Namespace="calico-system" Pod="whisker-747f4c9d5f-7885l" WorkloadEndpoint="localhost-k8s-whisker--747f4c9d5f--7885l-eth0" Sep 13 10:35:53.342065 containerd[1557]: time="2025-09-13T10:35:53.342017771Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9rmkh,Uid:2afe3b9e-e70d-4349-825e-5e225a4b0400,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e\"" Sep 13 10:35:53.343164 kubelet[2727]: E0913 10:35:53.343134 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:53.350417 containerd[1557]: time="2025-09-13T10:35:53.350328873Z" level=info msg="CreateContainer within sandbox \"1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 10:35:53.363721 containerd[1557]: time="2025-09-13T10:35:53.363673552Z" level=info msg="Container 1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:53.369281 containerd[1557]: time="2025-09-13T10:35:53.369181172Z" level=info msg="CreateContainer within sandbox \"1ddfad016f6d188c3e4e2f350c6a4bfc26ae79ce21d5f4c07619297cfeebaa1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd\"" Sep 13 10:35:53.372828 containerd[1557]: time="2025-09-13T10:35:53.372782799Z" level=info msg="StartContainer for \"1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd\"" Sep 13 10:35:53.373512 containerd[1557]: time="2025-09-13T10:35:53.373483684Z" level=info msg="connecting to shim 1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd" address="unix:///run/containerd/s/4db4efa93ca84535a51383e0b59daf9103f764a297c7a79a99818af9e9cec7d5" protocol=ttrpc version=3 Sep 13 10:35:53.383158 containerd[1557]: time="2025-09-13T10:35:53.383117593Z" level=info msg="connecting to shim 5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c" 
address="unix:///run/containerd/s/7baca5b8d74c570da1c29d086d5ccd9aad7a9142b5d8ecf89d2d23542ef6bd4c" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:53.399811 containerd[1557]: time="2025-09-13T10:35:53.399759776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" id:\"570b1b36bbfdc7d5d7b5329725e41d79cb6f59ba8b0b96e2949b12267976e163\" pid:4133 exit_status:1 exited_at:{seconds:1757759753 nanos:399470052}" Sep 13 10:35:53.407309 systemd[1]: Started cri-containerd-1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd.scope - libcontainer container 1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd. Sep 13 10:35:53.411752 systemd[1]: Started cri-containerd-5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c.scope - libcontainer container 5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c. Sep 13 10:35:53.430866 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:53.448500 containerd[1557]: time="2025-09-13T10:35:53.448449425Z" level=info msg="StartContainer for \"1f6ea08504cb1ffedcce6ff41befbde923e3e02432491a9db5509fe17a6d5fbd\" returns successfully" Sep 13 10:35:53.467403 containerd[1557]: time="2025-09-13T10:35:53.467249271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747f4c9d5f-7885l,Uid:b4ac2287-8ba9-46ff-b4ea-27f414aedbcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c\"" Sep 13 10:35:53.469313 containerd[1557]: time="2025-09-13T10:35:53.469274647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 10:35:54.023015 systemd-networkd[1482]: vxlan.calico: Link UP Sep 13 10:35:54.023192 systemd-networkd[1482]: vxlan.calico: Gained carrier Sep 13 10:35:54.117115 kubelet[2727]: I0913 10:35:54.117079 2727 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="aba9d2ad-8ec3-48bc-8d22-a001517c5dd3" path="/var/lib/kubelet/pods/aba9d2ad-8ec3-48bc-8d22-a001517c5dd3/volumes" Sep 13 10:35:54.149177 systemd-networkd[1482]: cali2640655a1d4: Gained IPv6LL Sep 13 10:35:54.237661 kubelet[2727]: E0913 10:35:54.237461 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:54.237968 kubelet[2727]: E0913 10:35:54.237824 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:54.250874 kubelet[2727]: I0913 10:35:54.250818 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9rmkh" podStartSLOduration=37.250803693 podStartE2EDuration="37.250803693s" podCreationTimestamp="2025-09-13 10:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:35:54.249341507 +0000 UTC m=+42.226290283" watchObservedRunningTime="2025-09-13 10:35:54.250803693 +0000 UTC m=+42.227752459" Sep 13 10:35:54.333160 containerd[1557]: time="2025-09-13T10:35:54.332996504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" id:\"318fab63a4fd6e1d1133a08d90df31eab5277e1708b0bec84254f54f195e1f8d\" pid:4443 exit_status:1 exited_at:{seconds:1757759754 nanos:332463497}" Sep 13 10:35:54.853187 systemd-networkd[1482]: calica306f393b2: Gained IPv6LL Sep 13 10:35:54.853529 systemd-networkd[1482]: cali96313b85785: Gained IPv6LL Sep 13 10:35:55.121504 containerd[1557]: time="2025-09-13T10:35:55.114975514Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-wxtsq,Uid:5c3899f2-5c7f-42b4-adaf-74e62e5f01e1,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:35:55.121600 containerd[1557]: time="2025-09-13T10:35:55.121570090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-jl8hn,Uid:d0426ace-6336-45df-bbe4-a10ad650afc4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 10:35:55.121757 containerd[1557]: time="2025-09-13T10:35:55.121731874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gzrf5,Uid:5961d03f-3270-4e65-9942-db58dded5a9d,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:55.121888 containerd[1557]: time="2025-09-13T10:35:55.121864964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695f99646f-wq2kh,Uid:8675c32c-23d1-40c6-87da-b0d97e289e16,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:55.239775 kubelet[2727]: E0913 10:35:55.239725 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:55.241974 kubelet[2727]: E0913 10:35:55.241092 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:55.268333 containerd[1557]: time="2025-09-13T10:35:55.268284839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:55.270549 containerd[1557]: time="2025-09-13T10:35:55.270496431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 10:35:55.271596 containerd[1557]: time="2025-09-13T10:35:55.271573456Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:55.274291 containerd[1557]: time="2025-09-13T10:35:55.274259439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:55.275733 containerd[1557]: time="2025-09-13T10:35:55.275539218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.806230625s" Sep 13 10:35:55.275733 containerd[1557]: time="2025-09-13T10:35:55.275570520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 10:35:55.281290 containerd[1557]: time="2025-09-13T10:35:55.281256027Z" level=info msg="CreateContainer within sandbox \"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 10:35:55.285803 systemd-networkd[1482]: calib852fe301fe: Link UP Sep 13 10:35:55.286547 systemd-networkd[1482]: calib852fe301fe: Gained carrier Sep 13 10:35:55.297454 containerd[1557]: time="2025-09-13T10:35:55.297099813Z" level=info msg="Container 1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.189 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0 calico-kube-controllers-695f99646f- calico-system 8675c32c-23d1-40c6-87da-b0d97e289e16 
886 0 2025-09-13 10:35:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:695f99646f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-695f99646f-wq2kh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib852fe301fe [] [] }} ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.189 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" HandleID="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Workload="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" HandleID="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Workload="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033ba30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-695f99646f-wq2kh", "timestamp":"2025-09-13 10:35:55.233981993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.241 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.247 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.258 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.260 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.263 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.263 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.266 [INFO][4558] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.270 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" host="localhost" Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 10:35:55.304320 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" HandleID="k8s-pod-network.24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Workload="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.281 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0", GenerateName:"calico-kube-controllers-695f99646f-", Namespace:"calico-system", SelfLink:"", UID:"8675c32c-23d1-40c6-87da-b0d97e289e16", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695f99646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-695f99646f-wq2kh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib852fe301fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.281 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.281 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib852fe301fe ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.289 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.290 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0", GenerateName:"calico-kube-controllers-695f99646f-", Namespace:"calico-system", SelfLink:"", UID:"8675c32c-23d1-40c6-87da-b0d97e289e16", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"695f99646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d", Pod:"calico-kube-controllers-695f99646f-wq2kh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib852fe301fe", MAC:"22:83:87:4f:10:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.304917 containerd[1557]: 2025-09-13 10:35:55.300 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" Namespace="calico-system" Pod="calico-kube-controllers-695f99646f-wq2kh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--695f99646f--wq2kh-eth0" Sep 13 10:35:55.311395 containerd[1557]: time="2025-09-13T10:35:55.311349216Z" level=info msg="CreateContainer within sandbox 
\"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde\"" Sep 13 10:35:55.312047 containerd[1557]: time="2025-09-13T10:35:55.311999531Z" level=info msg="StartContainer for \"1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde\"" Sep 13 10:35:55.313663 containerd[1557]: time="2025-09-13T10:35:55.313102125Z" level=info msg="connecting to shim 1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde" address="unix:///run/containerd/s/7baca5b8d74c570da1c29d086d5ccd9aad7a9142b5d8ecf89d2d23542ef6bd4c" protocol=ttrpc version=3 Sep 13 10:35:55.338203 containerd[1557]: time="2025-09-13T10:35:55.338125674Z" level=info msg="connecting to shim 24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d" address="unix:///run/containerd/s/667c1eda0cd76234fa651449328b09f4cfff6f9f5b0987c256c34dc389eaf749" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:55.342264 systemd[1]: Started cri-containerd-1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde.scope - libcontainer container 1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde. Sep 13 10:35:55.366204 systemd-networkd[1482]: vxlan.calico: Gained IPv6LL Sep 13 10:35:55.366270 systemd[1]: Started cri-containerd-24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d.scope - libcontainer container 24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d. 
Sep 13 10:35:55.380233 systemd-networkd[1482]: calia1a75f7f0bf: Link UP Sep 13 10:35:55.382451 systemd-networkd[1482]: calia1a75f7f0bf: Gained carrier Sep 13 10:35:55.395699 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:55.405979 containerd[1557]: time="2025-09-13T10:35:55.405940459Z" level=info msg="StartContainer for \"1af76b36c6eb31e0d23d7e6ac3abc639c846649cde7704652a71138920aaafde\" returns successfully" Sep 13 10:35:55.409842 containerd[1557]: time="2025-09-13T10:35:55.409811257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.192 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0 calico-apiserver-655cf64d6d- calico-apiserver d0426ace-6336-45df-bbe4-a10ad650afc4 881 0 2025-09-13 10:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655cf64d6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655cf64d6d-jl8hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia1a75f7f0bf [] [] }} ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.192 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" HandleID="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Workload="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" HandleID="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Workload="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655cf64d6d-jl8hn", "timestamp":"2025-09-13 10:35:55.234056437 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.234 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.277 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.342 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.348 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.354 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.356 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.357 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.357 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.359 [INFO][4567] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641 Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.364 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4567] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" host="localhost" Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:55.417624 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" HandleID="k8s-pod-network.dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Workload="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.377 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0", GenerateName:"calico-apiserver-655cf64d6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0426ace-6336-45df-bbe4-a10ad650afc4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655cf64d6d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655cf64d6d-jl8hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1a75f7f0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.377 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.377 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1a75f7f0bf ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.382 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.383 [INFO][4499] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0", GenerateName:"calico-apiserver-655cf64d6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0426ace-6336-45df-bbe4-a10ad650afc4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655cf64d6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641", Pod:"calico-apiserver-655cf64d6d-jl8hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1a75f7f0bf", MAC:"1e:34:fc:3c:09:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.418120 containerd[1557]: 2025-09-13 10:35:55.400 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-jl8hn" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--jl8hn-eth0" Sep 13 10:35:55.444214 containerd[1557]: time="2025-09-13T10:35:55.444131217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-695f99646f-wq2kh,Uid:8675c32c-23d1-40c6-87da-b0d97e289e16,Namespace:calico-system,Attempt:0,} returns sandbox id \"24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d\"" Sep 13 10:35:55.449472 containerd[1557]: time="2025-09-13T10:35:55.449427468Z" level=info msg="connecting to shim dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641" address="unix:///run/containerd/s/33836cbf0f89b8ba0565309859335a550531c68a03744c54d9cba1268876d795" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:55.475732 systemd-networkd[1482]: cali73e399661b3: Link UP Sep 13 10:35:55.476494 systemd-networkd[1482]: cali73e399661b3: Gained carrier Sep 13 10:35:55.479169 systemd[1]: Started cri-containerd-dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641.scope - libcontainer container dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641. 
Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.194 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0 calico-apiserver-655cf64d6d- calico-apiserver 5c3899f2-5c7f-42b4-adaf-74e62e5f01e1 883 0 2025-09-13 10:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655cf64d6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655cf64d6d-wxtsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73e399661b3 [] [] }} ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.194 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.250 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" HandleID="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Workload="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.252 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" 
HandleID="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Workload="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bee60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655cf64d6d-wxtsq", "timestamp":"2025-09-13 10:35:55.248286734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.252 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.373 [INFO][4569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.442 [INFO][4569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.448 [INFO][4569] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.452 [INFO][4569] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.454 [INFO][4569] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.455 [INFO][4569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 
2025-09-13 10:35:55.455 [INFO][4569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.457 [INFO][4569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.460 [INFO][4569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" host="localhost" Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 10:35:55.488102 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" HandleID="k8s-pod-network.62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Workload="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.471 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0", GenerateName:"calico-apiserver-655cf64d6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c3899f2-5c7f-42b4-adaf-74e62e5f01e1", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655cf64d6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655cf64d6d-wxtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e399661b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.472 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.472 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73e399661b3 ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.476 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.477 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0", 
GenerateName:"calico-apiserver-655cf64d6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c3899f2-5c7f-42b4-adaf-74e62e5f01e1", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655cf64d6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced", Pod:"calico-apiserver-655cf64d6d-wxtsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73e399661b3", MAC:"de:59:0b:99:e7:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.488567 containerd[1557]: 2025-09-13 10:35:55.485 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" Namespace="calico-apiserver" Pod="calico-apiserver-655cf64d6d-wxtsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655cf64d6d--wxtsq-eth0" Sep 13 10:35:55.497099 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:55.511336 containerd[1557]: time="2025-09-13T10:35:55.510381349Z" level=info msg="connecting to shim 
62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced" address="unix:///run/containerd/s/e04482d1b63b03b0b0b114f828378233b583f0226bed816a8cb6e0c8cc8b28a2" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:55.525515 containerd[1557]: time="2025-09-13T10:35:55.525474664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-jl8hn,Uid:d0426ace-6336-45df-bbe4-a10ad650afc4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641\"" Sep 13 10:35:55.539190 systemd[1]: Started cri-containerd-62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced.scope - libcontainer container 62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced. Sep 13 10:35:55.553760 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:55.573896 systemd-networkd[1482]: calie8a20d43b9d: Link UP Sep 13 10:35:55.574872 systemd-networkd[1482]: calie8a20d43b9d: Gained carrier Sep 13 10:35:55.590469 containerd[1557]: time="2025-09-13T10:35:55.590273554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655cf64d6d-wxtsq,Uid:5c3899f2-5c7f-42b4-adaf-74e62e5f01e1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced\"" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.190 [INFO][4521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--gzrf5-eth0 goldmane-54d579b49d- calico-system 5961d03f-3270-4e65-9942-db58dded5a9d 880 0 2025-09-13 10:35:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost 
goldmane-54d579b49d-gzrf5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie8a20d43b9d [] [] }} ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.193 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.252 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" HandleID="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Workload="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.252 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" HandleID="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Workload="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021f590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-gzrf5", "timestamp":"2025-09-13 10:35:55.252234783 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.252 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.465 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.542 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.548 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.552 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.553 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.556 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.556 [INFO][4564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.557 [INFO][4564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1 Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.561 [INFO][4564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.567 [INFO][4564] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.567 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" host="localhost" Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.567 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:55.592508 containerd[1557]: 2025-09-13 10:35:55.567 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" HandleID="k8s-pod-network.8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Workload="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.571 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gzrf5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5961d03f-3270-4e65-9942-db58dded5a9d", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-gzrf5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8a20d43b9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.571 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.571 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8a20d43b9d ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.575 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.577 [INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gzrf5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5961d03f-3270-4e65-9942-db58dded5a9d", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1", Pod:"goldmane-54d579b49d-gzrf5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8a20d43b9d", MAC:"aa:f5:ea:16:fe:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:55.593687 containerd[1557]: 2025-09-13 10:35:55.587 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" Namespace="calico-system" Pod="goldmane-54d579b49d-gzrf5" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gzrf5-eth0" Sep 13 10:35:55.614115 containerd[1557]: time="2025-09-13T10:35:55.614067240Z" level=info msg="connecting to shim 8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1" address="unix:///run/containerd/s/401bc9a7b7384632610d456a6074f2d2f35b2016cdae3c7aaa89144d08820211" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:55.637162 systemd[1]: Started cri-containerd-8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1.scope - libcontainer container 8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1. Sep 13 10:35:55.651616 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:55.680287 containerd[1557]: time="2025-09-13T10:35:55.680179654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gzrf5,Uid:5961d03f-3270-4e65-9942-db58dded5a9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1\"" Sep 13 10:35:56.253455 kubelet[2727]: E0913 10:35:56.253408 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:56.253809 kubelet[2727]: E0913 10:35:56.253695 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:35:56.645610 systemd-networkd[1482]: calie8a20d43b9d: Gained IPv6LL Sep 13 10:35:56.837167 systemd-networkd[1482]: cali73e399661b3: Gained IPv6LL Sep 13 10:35:57.093537 systemd-networkd[1482]: calib852fe301fe: Gained IPv6LL Sep 13 10:35:57.114513 containerd[1557]: time="2025-09-13T10:35:57.114471723Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-wzp5l,Uid:df2ad2ba-6007-4e6d-89d9-770ca47aef38,Namespace:calico-system,Attempt:0,}" Sep 13 10:35:57.159210 systemd-networkd[1482]: calia1a75f7f0bf: Gained IPv6LL Sep 13 10:35:57.208914 systemd-networkd[1482]: cali851d7915eef: Link UP Sep 13 10:35:57.209354 systemd-networkd[1482]: cali851d7915eef: Gained carrier Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.146 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wzp5l-eth0 csi-node-driver- calico-system df2ad2ba-6007-4e6d-89d9-770ca47aef38 768 0 2025-09-13 10:35:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wzp5l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali851d7915eef [] [] }} ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.146 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.171 [INFO][4864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" HandleID="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" 
Workload="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.171 [INFO][4864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" HandleID="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Workload="localhost-k8s-csi--node--driver--wzp5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wzp5l", "timestamp":"2025-09-13 10:35:57.171504001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.172 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.173 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.173 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.179 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.183 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.187 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.189 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.191 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.191 [INFO][4864] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.192 [INFO][4864] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19 Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.196 [INFO][4864] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.202 [INFO][4864] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.202 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" host="localhost" Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.202 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 10:35:57.222782 containerd[1557]: 2025-09-13 10:35:57.202 [INFO][4864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" HandleID="k8s-pod-network.4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Workload="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.205 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzp5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df2ad2ba-6007-4e6d-89d9-770ca47aef38", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wzp5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali851d7915eef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.205 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.205 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali851d7915eef ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.209 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.209 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" 
Namespace="calico-system" Pod="csi-node-driver-wzp5l" WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wzp5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df2ad2ba-6007-4e6d-89d9-770ca47aef38", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 10, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19", Pod:"csi-node-driver-wzp5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali851d7915eef", MAC:"ca:e7:b1:81:fe:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 10:35:57.223488 containerd[1557]: 2025-09-13 10:35:57.218 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" Namespace="calico-system" Pod="csi-node-driver-wzp5l" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wzp5l-eth0" Sep 13 10:35:57.244611 containerd[1557]: time="2025-09-13T10:35:57.244461305Z" level=info msg="connecting to shim 4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19" address="unix:///run/containerd/s/f453713327e7b9a937fcc560edf47e0b78cfac4fc66c7e4caa7ce446b1a3afd9" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:35:57.271168 systemd[1]: Started cri-containerd-4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19.scope - libcontainer container 4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19. Sep 13 10:35:57.289541 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 10:35:57.305394 containerd[1557]: time="2025-09-13T10:35:57.305360572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzp5l,Uid:df2ad2ba-6007-4e6d-89d9-770ca47aef38,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19\"" Sep 13 10:35:57.578860 systemd[1]: Started sshd@8-10.0.0.4:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444). Sep 13 10:35:57.641410 sshd[4932]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:35:57.643847 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:35:57.649383 systemd-logind[1543]: New session 9 of user core. Sep 13 10:35:57.660161 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 10:35:57.805193 sshd[4935]: Connection closed by 10.0.0.1 port 43444 Sep 13 10:35:57.806101 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Sep 13 10:35:57.811468 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. 
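The IPAM sequence the csi-node-driver entries above record (acquire the host-wide IPAM lock, load the affinity block 192.168.88.128/26, claim the next free address, write the block back) can be sketched as follows. This is an illustration of the flow, not Calico's actual implementation; the in-memory `allocated` set stands in for the block state Calico keeps in its datastore, and the pre-allocated range matches the .128–.135 addresses already assigned in this log.

```python
# Sketch (not Calico's code) of block-affinity IPAM: lock, load the affine
# /26 block, claim the first free host address for the new endpoint.
import ipaddress
import threading

ipam_lock = threading.Lock()  # stands in for the host-wide IPAM lock

block = ipaddress.ip_network("192.168.88.128/26")
# Hypothetical block state: .128-.135 already claimed by earlier pods.
allocated = {f"192.168.88.{i}" for i in range(128, 136)}

def auto_assign(handle: str) -> str:
    """Claim the next free IP in the affine block under the IPAM lock."""
    with ipam_lock:
        for host in block.hosts():
            addr = str(host)
            if addr not in allocated:
                allocated.add(addr)  # "Writing block in order to claim IPs"
                return f"{addr}/26"
        raise RuntimeError("block exhausted")

# The log claims 192.168.88.136/26 for csi-node-driver-wzp5l.
print(auto_assign("k8s-pod-network.4bd84b9fed80"))  # → 192.168.88.136/26
```

Note that `hosts()` excludes the network (.128) and broadcast (.191) addresses, so a /26 block yields at most 62 workload IPs per host.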
Sep 13 10:35:57.812070 systemd[1]: sshd@8-10.0.0.4:22-10.0.0.1:43444.service: Deactivated successfully. Sep 13 10:35:57.815900 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 10:35:57.817576 systemd-logind[1543]: Removed session 9. Sep 13 10:35:57.901440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111978678.mount: Deactivated successfully. Sep 13 10:35:58.162291 containerd[1557]: time="2025-09-13T10:35:58.162190376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:58.162912 containerd[1557]: time="2025-09-13T10:35:58.162868673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 10:35:58.164042 containerd[1557]: time="2025-09-13T10:35:58.164004226Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:58.166074 containerd[1557]: time="2025-09-13T10:35:58.166043945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:35:58.166671 containerd[1557]: time="2025-09-13T10:35:58.166639411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.756793686s" Sep 13 10:35:58.166719 containerd[1557]: time="2025-09-13T10:35:58.166672695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image 
reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 10:35:58.167579 containerd[1557]: time="2025-09-13T10:35:58.167552853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 10:35:58.170650 containerd[1557]: time="2025-09-13T10:35:58.170623833Z" level=info msg="CreateContainer within sandbox \"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 10:35:58.178747 containerd[1557]: time="2025-09-13T10:35:58.178704796Z" level=info msg="Container 7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:35:58.186929 containerd[1557]: time="2025-09-13T10:35:58.186891023Z" level=info msg="CreateContainer within sandbox \"5e7dd1ed2b867831c04ffb9a430bb35a91b5b75e5cffa5629d5a56df4438cb7c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800\"" Sep 13 10:35:58.187807 containerd[1557]: time="2025-09-13T10:35:58.187280228Z" level=info msg="StartContainer for \"7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800\"" Sep 13 10:35:58.188247 containerd[1557]: time="2025-09-13T10:35:58.188222848Z" level=info msg="connecting to shim 7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800" address="unix:///run/containerd/s/7baca5b8d74c570da1c29d086d5ccd9aad7a9142b5d8ecf89d2d23542ef6bd4c" protocol=ttrpc version=3 Sep 13 10:35:58.211165 systemd[1]: Started cri-containerd-7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800.scope - libcontainer container 7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800. 
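The containerd entries above use logrus-style logfmt (`time="…" level=info msg="…"` with inner quotes backslash-escaped). A minimal parser for such lines, useful when post-processing this kind of log, can be sketched as below; it is an assumption-level reimplementation, not containerd tooling.

```python
# Sketch of a logfmt-style parser for containerd log entries like
# time="…" level=info msg="…"; handles \" escapes inside quoted values.
import re

FIELD = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_logfmt(line: str) -> dict:
    """Return key->value pairs, unescaping quoted values."""
    out = {}
    for key, quoted, bare in FIELD.findall(line):
        out[key] = bare if bare else quoted.replace('\\"', '"')
    return out

entry = parse_logfmt(
    'time="2025-09-13T10:35:58.318319886Z" level=info '
    'msg="StartContainer for \\"7290d9e9afa8\\" returns successfully"'
)
print(entry["level"], "->", entry["msg"])
```

The regex consumes each quoted value as a single match, so spaces and `key=value` lookalikes inside `msg` are not split apart.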
Sep 13 10:35:58.318362 containerd[1557]: time="2025-09-13T10:35:58.318319886Z" level=info msg="StartContainer for \"7290d9e9afa8c6cf76efdb3007503e00336608eeb77766ccf57889b34d58b800\" returns successfully" Sep 13 10:35:58.950415 systemd-networkd[1482]: cali851d7915eef: Gained IPv6LL Sep 13 10:35:59.285582 kubelet[2727]: I0913 10:35:59.285338 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-747f4c9d5f-7885l" podStartSLOduration=2.586664594 podStartE2EDuration="7.285321669s" podCreationTimestamp="2025-09-13 10:35:52 +0000 UTC" firstStartedPulling="2025-09-13 10:35:53.468732832 +0000 UTC m=+41.445681608" lastFinishedPulling="2025-09-13 10:35:58.167389907 +0000 UTC m=+46.144338683" observedRunningTime="2025-09-13 10:35:59.284828874 +0000 UTC m=+47.261777650" watchObservedRunningTime="2025-09-13 10:35:59.285321669 +0000 UTC m=+47.262270435" Sep 13 10:36:01.601239 containerd[1557]: time="2025-09-13T10:36:01.601184517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:01.602061 containerd[1557]: time="2025-09-13T10:36:01.601999576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 10:36:01.603082 containerd[1557]: time="2025-09-13T10:36:01.603019091Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:01.604890 containerd[1557]: time="2025-09-13T10:36:01.604859155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:01.605313 containerd[1557]: time="2025-09-13T10:36:01.605282716Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.437703783s" Sep 13 10:36:01.605313 containerd[1557]: time="2025-09-13T10:36:01.605310059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 10:36:01.609565 containerd[1557]: time="2025-09-13T10:36:01.609534132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 10:36:01.624739 containerd[1557]: time="2025-09-13T10:36:01.624694101Z" level=info msg="CreateContainer within sandbox \"24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 10:36:01.632382 containerd[1557]: time="2025-09-13T10:36:01.632340433Z" level=info msg="Container 624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:36:01.642234 containerd[1557]: time="2025-09-13T10:36:01.642201646Z" level=info msg="CreateContainer within sandbox \"24914236d5c37385ddbfbb773529b9ee681b7f402f9f837f7efa5d2f67b5a85d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a\"" Sep 13 10:36:01.642657 containerd[1557]: time="2025-09-13T10:36:01.642626409Z" level=info msg="StartContainer for \"624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a\"" Sep 13 10:36:01.643661 containerd[1557]: time="2025-09-13T10:36:01.643638590Z" level=info msg="connecting to shim 624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a" 
address="unix:///run/containerd/s/667c1eda0cd76234fa651449328b09f4cfff6f9f5b0987c256c34dc389eaf749" protocol=ttrpc version=3 Sep 13 10:36:01.669161 systemd[1]: Started cri-containerd-624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a.scope - libcontainer container 624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a. Sep 13 10:36:01.715304 containerd[1557]: time="2025-09-13T10:36:01.715266839Z" level=info msg="StartContainer for \"624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a\" returns successfully" Sep 13 10:36:02.309983 kubelet[2727]: I0913 10:36:02.309912 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-695f99646f-wq2kh" podStartSLOduration=26.146142218 podStartE2EDuration="32.309872991s" podCreationTimestamp="2025-09-13 10:35:30 +0000 UTC" firstStartedPulling="2025-09-13 10:35:55.445720188 +0000 UTC m=+43.422668964" lastFinishedPulling="2025-09-13 10:36:01.609450961 +0000 UTC m=+49.586399737" observedRunningTime="2025-09-13 10:36:02.304224714 +0000 UTC m=+50.281173490" watchObservedRunningTime="2025-09-13 10:36:02.309872991 +0000 UTC m=+50.286821767" Sep 13 10:36:02.363501 containerd[1557]: time="2025-09-13T10:36:02.363450236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a\" id:\"dfaf7e2a65c938c836d9ce222fe9479c129e46fbb6e66336469382491e6c9793\" pid:5060 exited_at:{seconds:1757759762 nanos:361858524}" Sep 13 10:36:02.819793 systemd[1]: Started sshd@9-10.0.0.4:22-10.0.0.1:38504.service - OpenSSH per-connection server daemon (10.0.0.1:38504). 
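The kubelet `pod_startup_latency_tracker` entries above report two durations: `podStartE2EDuration` (observed-running time minus pod creation time) and `podStartSLOduration` (the same interval with image-pull time excluded). Using the timestamps from the calico-kube-controllers entry, expressed as seconds since the pod's 10:35:30 creation, the arithmetic can be checked directly; this mirrors the tracker's calculation rather than reproducing kubelet code.

```python
# Verify the pod-startup durations logged for
# calico-kube-controllers-695f99646f-wq2kh (created 10:35:30).
from decimal import Decimal as D

first_started_pulling = D("25.445720188")   # 10:35:55.445720188
last_finished_pulling = D("31.609450961")   # 10:36:01.609450961
observed_running      = D("32.309872991")   # 10:36:02.309872991

pod_start_e2e = observed_running                         # running - creation
image_pull    = last_finished_pulling - first_started_pulling
pod_start_slo = pod_start_e2e - image_pull               # E2E minus pull time

print(f"E2E={pod_start_e2e}s SLO={pod_start_slo}s")
```

Both results match the logged values (`podStartE2EDuration="32.309872991s"`, `podStartSLOduration=26.146142218`), confirming that the SLO figure simply discounts the 6.16 s spent pulling the kube-controllers image.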
Sep 13 10:36:02.877245 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 38504 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:36:02.878744 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:36:02.882907 systemd-logind[1543]: New session 10 of user core. Sep 13 10:36:02.888154 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 10:36:03.009900 sshd[5076]: Connection closed by 10.0.0.1 port 38504 Sep 13 10:36:03.010224 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Sep 13 10:36:03.014698 systemd[1]: sshd@9-10.0.0.4:22-10.0.0.1:38504.service: Deactivated successfully. Sep 13 10:36:03.016778 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 10:36:03.017539 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Sep 13 10:36:03.018877 systemd-logind[1543]: Removed session 10. Sep 13 10:36:05.451955 containerd[1557]: time="2025-09-13T10:36:05.451910649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:05.451955 containerd[1557]: time="2025-09-13T10:36:05.451911631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 10:36:05.454419 containerd[1557]: time="2025-09-13T10:36:05.454360756Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:05.455062 containerd[1557]: time="2025-09-13T10:36:05.455018468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.845459157s" Sep 13 10:36:05.455104 containerd[1557]: time="2025-09-13T10:36:05.455064888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 10:36:05.455466 containerd[1557]: time="2025-09-13T10:36:05.455432188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:05.456346 containerd[1557]: time="2025-09-13T10:36:05.456307891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 10:36:05.461227 containerd[1557]: time="2025-09-13T10:36:05.461196373Z" level=info msg="CreateContainer within sandbox \"dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 10:36:05.469950 containerd[1557]: time="2025-09-13T10:36:05.469916985Z" level=info msg="Container 8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:36:05.479327 containerd[1557]: time="2025-09-13T10:36:05.479292481Z" level=info msg="CreateContainer within sandbox \"dfe2bd880b59ed9535933b640c5a546d154b0601fad8c42c66f2374188fe7641\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708\"" Sep 13 10:36:05.479746 containerd[1557]: time="2025-09-13T10:36:05.479719517Z" level=info msg="StartContainer for \"8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708\"" Sep 13 10:36:05.480698 containerd[1557]: time="2025-09-13T10:36:05.480669093Z" level=info msg="connecting to shim 
8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708" address="unix:///run/containerd/s/33836cbf0f89b8ba0565309859335a550531c68a03744c54d9cba1268876d795" protocol=ttrpc version=3 Sep 13 10:36:05.532166 systemd[1]: Started cri-containerd-8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708.scope - libcontainer container 8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708. Sep 13 10:36:05.576839 containerd[1557]: time="2025-09-13T10:36:05.576802377Z" level=info msg="StartContainer for \"8bb1e6decd7983ae1a5a81b892dac000d4865f1a63eee5d50e90fee4c5b3c708\" returns successfully" Sep 13 10:36:05.885075 containerd[1557]: time="2025-09-13T10:36:05.884946222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:36:05.885950 containerd[1557]: time="2025-09-13T10:36:05.885926898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 10:36:05.887605 containerd[1557]: time="2025-09-13T10:36:05.887579494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 431.239651ms" Sep 13 10:36:05.887605 containerd[1557]: time="2025-09-13T10:36:05.887605283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 10:36:05.888763 containerd[1557]: time="2025-09-13T10:36:05.888508510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 10:36:05.892009 containerd[1557]: time="2025-09-13T10:36:05.891953451Z" level=info msg="CreateContainer 
within sandbox \"62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 10:36:05.902226 containerd[1557]: time="2025-09-13T10:36:05.902193470Z" level=info msg="Container d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:36:05.911527 containerd[1557]: time="2025-09-13T10:36:05.911490646Z" level=info msg="CreateContainer within sandbox \"62e29e7f431b0e6a2c768ca611c60029c7f448487dcb73b811adb646cb940ced\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141\"" Sep 13 10:36:05.913096 containerd[1557]: time="2025-09-13T10:36:05.913061924Z" level=info msg="StartContainer for \"d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141\"" Sep 13 10:36:05.914297 containerd[1557]: time="2025-09-13T10:36:05.914140970Z" level=info msg="connecting to shim d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141" address="unix:///run/containerd/s/e04482d1b63b03b0b0b114f828378233b583f0226bed816a8cb6e0c8cc8b28a2" protocol=ttrpc version=3 Sep 13 10:36:05.933152 systemd[1]: Started cri-containerd-d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141.scope - libcontainer container d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141. 
Sep 13 10:36:05.978695 containerd[1557]: time="2025-09-13T10:36:05.978648365Z" level=info msg="StartContainer for \"d9fb289c66bc2628760655e32da98ec14ab9abbaaa48b7966e9f5f943a922141\" returns successfully" Sep 13 10:36:06.324379 kubelet[2727]: I0913 10:36:06.324314 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655cf64d6d-wxtsq" podStartSLOduration=29.028125465 podStartE2EDuration="39.32429658s" podCreationTimestamp="2025-09-13 10:35:27 +0000 UTC" firstStartedPulling="2025-09-13 10:35:55.592180222 +0000 UTC m=+43.569128998" lastFinishedPulling="2025-09-13 10:36:05.888351337 +0000 UTC m=+53.865300113" observedRunningTime="2025-09-13 10:36:06.323685499 +0000 UTC m=+54.300634275" watchObservedRunningTime="2025-09-13 10:36:06.32429658 +0000 UTC m=+54.301245356" Sep 13 10:36:06.336538 kubelet[2727]: I0913 10:36:06.336460 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655cf64d6d-jl8hn" podStartSLOduration=29.407095085999998 podStartE2EDuration="39.336437385s" podCreationTimestamp="2025-09-13 10:35:27 +0000 UTC" firstStartedPulling="2025-09-13 10:35:55.526842404 +0000 UTC m=+43.503791180" lastFinishedPulling="2025-09-13 10:36:05.456184703 +0000 UTC m=+53.433133479" observedRunningTime="2025-09-13 10:36:06.336113038 +0000 UTC m=+54.313061814" watchObservedRunningTime="2025-09-13 10:36:06.336437385 +0000 UTC m=+54.313386161" Sep 13 10:36:07.304554 kubelet[2727]: I0913 10:36:07.304518 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 10:36:08.024576 systemd[1]: Started sshd@10-10.0.0.4:22-10.0.0.1:38512.service - OpenSSH per-connection server daemon (10.0.0.1:38512). 
Sep 13 10:36:08.080734 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 38512 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:36:08.082189 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:36:08.086459 systemd-logind[1543]: New session 11 of user core. Sep 13 10:36:08.096162 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 10:36:08.242111 sshd[5186]: Connection closed by 10.0.0.1 port 38512 Sep 13 10:36:08.243238 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Sep 13 10:36:08.252823 systemd[1]: sshd@10-10.0.0.4:22-10.0.0.1:38512.service: Deactivated successfully. Sep 13 10:36:08.255006 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 10:36:08.258065 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Sep 13 10:36:08.260407 systemd[1]: Started sshd@11-10.0.0.4:22-10.0.0.1:38514.service - OpenSSH per-connection server daemon (10.0.0.1:38514). Sep 13 10:36:08.261437 systemd-logind[1543]: Removed session 11. Sep 13 10:36:08.319975 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 38514 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4 Sep 13 10:36:08.323170 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:36:08.329116 systemd-logind[1543]: New session 12 of user core. Sep 13 10:36:08.340212 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 10:36:08.579967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109010859.mount: Deactivated successfully. Sep 13 10:36:08.632172 sshd[5207]: Connection closed by 10.0.0.1 port 38514 Sep 13 10:36:08.632589 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Sep 13 10:36:08.642713 systemd[1]: sshd@11-10.0.0.4:22-10.0.0.1:38514.service: Deactivated successfully. 
Sep 13 10:36:08.644646 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 10:36:08.645454 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit.
Sep 13 10:36:08.651561 systemd[1]: Started sshd@12-10.0.0.4:22-10.0.0.1:38524.service - OpenSSH per-connection server daemon (10.0.0.1:38524).
Sep 13 10:36:08.652491 systemd-logind[1543]: Removed session 12.
Sep 13 10:36:09.061197 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:09.063122 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:09.067721 systemd-logind[1543]: New session 13 of user core.
Sep 13 10:36:09.083149 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 10:36:09.212400 sshd[5227]: Connection closed by 10.0.0.1 port 38524
Sep 13 10:36:09.212817 sshd-session[5218]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:09.218144 systemd[1]: sshd@12-10.0.0.4:22-10.0.0.1:38524.service: Deactivated successfully.
Sep 13 10:36:09.221361 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 10:36:09.222506 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit.
Sep 13 10:36:09.224378 systemd-logind[1543]: Removed session 13.
Sep 13 10:36:09.516629 containerd[1557]: time="2025-09-13T10:36:09.516561108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:09.517451 containerd[1557]: time="2025-09-13T10:36:09.517404466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 13 10:36:09.522584 containerd[1557]: time="2025-09-13T10:36:09.522535692Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:09.524730 containerd[1557]: time="2025-09-13T10:36:09.524704350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:09.525356 containerd[1557]: time="2025-09-13T10:36:09.525326892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.63678614s"
Sep 13 10:36:09.525399 containerd[1557]: time="2025-09-13T10:36:09.525356349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 13 10:36:09.526327 containerd[1557]: time="2025-09-13T10:36:09.526300993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 13 10:36:09.530290 containerd[1557]: time="2025-09-13T10:36:09.530263092Z" level=info msg="CreateContainer within sandbox \"8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 10:36:09.538339 containerd[1557]: time="2025-09-13T10:36:09.538300870Z" level=info msg="Container e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:36:09.546431 containerd[1557]: time="2025-09-13T10:36:09.546385588Z" level=info msg="CreateContainer within sandbox \"8c197dd0fda0ec458f609ebd79572e3760abe329750b2d9eac7820e7798174e1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\""
Sep 13 10:36:09.547035 containerd[1557]: time="2025-09-13T10:36:09.546922294Z" level=info msg="StartContainer for \"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\""
Sep 13 10:36:09.548001 containerd[1557]: time="2025-09-13T10:36:09.547972312Z" level=info msg="connecting to shim e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af" address="unix:///run/containerd/s/401bc9a7b7384632610d456a6074f2d2f35b2016cdae3c7aaa89144d08820211" protocol=ttrpc version=3
Sep 13 10:36:09.572164 systemd[1]: Started cri-containerd-e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af.scope - libcontainer container e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af.
Sep 13 10:36:09.719571 containerd[1557]: time="2025-09-13T10:36:09.719528143Z" level=info msg="StartContainer for \"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\" returns successfully"
Sep 13 10:36:10.327229 kubelet[2727]: I0913 10:36:10.327163 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-gzrf5" podStartSLOduration=27.482548021 podStartE2EDuration="41.327144566s" podCreationTimestamp="2025-09-13 10:35:29 +0000 UTC" firstStartedPulling="2025-09-13 10:35:55.681562413 +0000 UTC m=+43.658511179" lastFinishedPulling="2025-09-13 10:36:09.526158948 +0000 UTC m=+57.503107724" observedRunningTime="2025-09-13 10:36:10.323205253 +0000 UTC m=+58.300154019" watchObservedRunningTime="2025-09-13 10:36:10.327144566 +0000 UTC m=+58.304093342"
Sep 13 10:36:10.420116 containerd[1557]: time="2025-09-13T10:36:10.420064113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\" id:\"e1572ad63fd403399ad36570a6d7eb17d6231fa0c7d17690bcdb4682e7db9839\" pid:5292 exit_status:1 exited_at:{seconds:1757759770 nanos:419627430}"
Sep 13 10:36:11.076811 containerd[1557]: time="2025-09-13T10:36:11.076763073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:11.077654 containerd[1557]: time="2025-09-13T10:36:11.077619034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 13 10:36:11.078839 containerd[1557]: time="2025-09-13T10:36:11.078809190Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:11.080539 containerd[1557]: time="2025-09-13T10:36:11.080509370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:11.081063 containerd[1557]: time="2025-09-13T10:36:11.081015197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.554685839s"
Sep 13 10:36:11.081104 containerd[1557]: time="2025-09-13T10:36:11.081067016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 13 10:36:11.085797 containerd[1557]: time="2025-09-13T10:36:11.085763638Z" level=info msg="CreateContainer within sandbox \"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 10:36:11.094131 containerd[1557]: time="2025-09-13T10:36:11.094009749Z" level=info msg="Container 3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:36:11.120583 containerd[1557]: time="2025-09-13T10:36:11.120531161Z" level=info msg="CreateContainer within sandbox \"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd\""
Sep 13 10:36:11.121068 containerd[1557]: time="2025-09-13T10:36:11.121045533Z" level=info msg="StartContainer for \"3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd\""
Sep 13 10:36:11.122474 containerd[1557]: time="2025-09-13T10:36:11.122427690Z" level=info msg="connecting to shim 3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd" address="unix:///run/containerd/s/f453713327e7b9a937fcc560edf47e0b78cfac4fc66c7e4caa7ce446b1a3afd9" protocol=ttrpc version=3
Sep 13 10:36:11.154169 systemd[1]: Started cri-containerd-3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd.scope - libcontainer container 3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd.
Sep 13 10:36:11.192208 containerd[1557]: time="2025-09-13T10:36:11.192161437Z" level=info msg="StartContainer for \"3ecfdc63637c80e2458b8186c4047031b78cb87c7ace0fe5718f9e9cb1d60bbd\" returns successfully"
Sep 13 10:36:11.193191 containerd[1557]: time="2025-09-13T10:36:11.193173631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 10:36:11.396542 containerd[1557]: time="2025-09-13T10:36:11.396415115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\" id:\"27982df25c1fcbba6d4ca2449c9bbbafb957b2e7fb5ac6c9c91ec18e3160b75c\" pid:5348 exit_status:1 exited_at:{seconds:1757759771 nanos:396118964}"
Sep 13 10:36:14.228074 systemd[1]: Started sshd@13-10.0.0.4:22-10.0.0.1:55122.service - OpenSSH per-connection server daemon (10.0.0.1:55122).
Sep 13 10:36:14.233285 containerd[1557]: time="2025-09-13T10:36:14.233236140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:14.234007 containerd[1557]: time="2025-09-13T10:36:14.233937551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 10:36:14.235256 containerd[1557]: time="2025-09-13T10:36:14.235207250Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:14.237304 containerd[1557]: time="2025-09-13T10:36:14.237273994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:36:14.237810 containerd[1557]: time="2025-09-13T10:36:14.237768718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.044570429s"
Sep 13 10:36:14.237859 containerd[1557]: time="2025-09-13T10:36:14.237813544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 10:36:14.242529 containerd[1557]: time="2025-09-13T10:36:14.242475793Z" level=info msg="CreateContainer within sandbox \"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 10:36:14.254691 containerd[1557]: time="2025-09-13T10:36:14.254468585Z" level=info msg="Container 20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:36:14.271419 containerd[1557]: time="2025-09-13T10:36:14.271356302Z" level=info msg="CreateContainer within sandbox \"4bd84b9fed8061c0cb92f83f0d6b6334b6c4a34575007de3d007b306b76e5d19\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621\""
Sep 13 10:36:14.272341 containerd[1557]: time="2025-09-13T10:36:14.272302407Z" level=info msg="StartContainer for \"20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621\""
Sep 13 10:36:14.273768 containerd[1557]: time="2025-09-13T10:36:14.273672759Z" level=info msg="connecting to shim 20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621" address="unix:///run/containerd/s/f453713327e7b9a937fcc560edf47e0b78cfac4fc66c7e4caa7ce446b1a3afd9" protocol=ttrpc version=3
Sep 13 10:36:14.301157 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 55122 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:14.303147 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:14.306269 systemd[1]: Started cri-containerd-20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621.scope - libcontainer container 20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621.
Sep 13 10:36:14.310751 systemd-logind[1543]: New session 14 of user core.
Sep 13 10:36:14.313202 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 10:36:14.357484 containerd[1557]: time="2025-09-13T10:36:14.357392385Z" level=info msg="StartContainer for \"20dd22a0d95b2c4616009828dbf4ab1858bea0c30f1b989e6c15ff0471f7e621\" returns successfully"
Sep 13 10:36:14.451058 sshd[5395]: Connection closed by 10.0.0.1 port 55122
Sep 13 10:36:14.451386 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:14.455275 systemd[1]: sshd@13-10.0.0.4:22-10.0.0.1:55122.service: Deactivated successfully.
Sep 13 10:36:14.457266 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 10:36:14.458012 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit.
Sep 13 10:36:14.459164 systemd-logind[1543]: Removed session 14.
Sep 13 10:36:15.180770 kubelet[2727]: I0913 10:36:15.180736 2727 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 10:36:15.181715 kubelet[2727]: I0913 10:36:15.181689 2727 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 10:36:15.342599 kubelet[2727]: I0913 10:36:15.342539 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzp5l" podStartSLOduration=28.410532572 podStartE2EDuration="45.342522662s" podCreationTimestamp="2025-09-13 10:35:30 +0000 UTC" firstStartedPulling="2025-09-13 10:35:57.306777073 +0000 UTC m=+45.283725849" lastFinishedPulling="2025-09-13 10:36:14.238767163 +0000 UTC m=+62.215715939" observedRunningTime="2025-09-13 10:36:15.341799779 +0000 UTC m=+63.318748545" watchObservedRunningTime="2025-09-13 10:36:15.342522662 +0000 UTC m=+63.319471438"
Sep 13 10:36:19.473870 systemd[1]: Started sshd@14-10.0.0.4:22-10.0.0.1:55158.service - OpenSSH per-connection server daemon (10.0.0.1:55158).
Sep 13 10:36:19.518519 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:19.519686 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:19.523635 systemd-logind[1543]: New session 15 of user core.
Sep 13 10:36:19.535159 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 10:36:19.645340 sshd[5434]: Connection closed by 10.0.0.1 port 55158
Sep 13 10:36:19.645670 sshd-session[5431]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:19.650079 systemd[1]: sshd@14-10.0.0.4:22-10.0.0.1:55158.service: Deactivated successfully.
Sep 13 10:36:19.651970 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 10:36:19.652813 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit.
Sep 13 10:36:19.653997 systemd-logind[1543]: Removed session 15.
Sep 13 10:36:24.318174 containerd[1557]: time="2025-09-13T10:36:24.318130184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" id:\"adc3f77bd4e8cb465a53427bada4f36502b168a5050aecb0dab9647be7a7008f\" pid:5460 exited_at:{seconds:1757759784 nanos:317737044}"
Sep 13 10:36:24.661623 systemd[1]: Started sshd@15-10.0.0.4:22-10.0.0.1:55954.service - OpenSSH per-connection server daemon (10.0.0.1:55954).
Sep 13 10:36:24.730716 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 55954 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:24.732278 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:24.736339 systemd-logind[1543]: New session 16 of user core.
Sep 13 10:36:24.744142 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 10:36:24.855308 sshd[5477]: Connection closed by 10.0.0.1 port 55954
Sep 13 10:36:24.855655 sshd-session[5474]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:24.859991 systemd[1]: sshd@15-10.0.0.4:22-10.0.0.1:55954.service: Deactivated successfully.
Sep 13 10:36:24.862132 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 10:36:24.862935 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit.
Sep 13 10:36:24.864944 systemd-logind[1543]: Removed session 16.
Sep 13 10:36:27.114089 kubelet[2727]: E0913 10:36:27.114049 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:36:29.872041 systemd[1]: Started sshd@16-10.0.0.4:22-10.0.0.1:55966.service - OpenSSH per-connection server daemon (10.0.0.1:55966).
Sep 13 10:36:30.050102 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 55966 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:30.052592 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:30.058282 systemd-logind[1543]: New session 17 of user core.
Sep 13 10:36:30.065160 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 10:36:30.220868 sshd[5493]: Connection closed by 10.0.0.1 port 55966
Sep 13 10:36:30.221214 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:30.225466 systemd[1]: sshd@16-10.0.0.4:22-10.0.0.1:55966.service: Deactivated successfully.
Sep 13 10:36:30.227563 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 10:36:30.228373 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit.
Sep 13 10:36:30.229444 systemd-logind[1543]: Removed session 17.
Sep 13 10:36:32.114887 kubelet[2727]: E0913 10:36:32.114841 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:36:32.325473 containerd[1557]: time="2025-09-13T10:36:32.325406959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624e98e9819ff9cc3c0961907f4203743ce95902faa23326135e8dea19ff564a\" id:\"585a919a32946eae4291f2261d5c905b7d8347fa870923564749915f340453d7\" pid:5518 exited_at:{seconds:1757759792 nanos:325181974}"
Sep 13 10:36:32.929653 kubelet[2727]: I0913 10:36:32.929599 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 10:36:33.114125 kubelet[2727]: E0913 10:36:33.114091 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:36:35.232327 systemd[1]: Started sshd@17-10.0.0.4:22-10.0.0.1:39804.service - OpenSSH per-connection server daemon (10.0.0.1:39804).
Sep 13 10:36:35.273413 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 39804 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:35.274983 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:35.279044 systemd-logind[1543]: New session 18 of user core.
Sep 13 10:36:35.286156 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 10:36:35.445976 sshd[5540]: Connection closed by 10.0.0.1 port 39804
Sep 13 10:36:35.447227 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:35.455166 systemd[1]: sshd@17-10.0.0.4:22-10.0.0.1:39804.service: Deactivated successfully.
Sep 13 10:36:35.456956 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 10:36:35.458358 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit.
Sep 13 10:36:35.460843 systemd-logind[1543]: Removed session 18.
Sep 13 10:36:35.464497 systemd[1]: Started sshd@18-10.0.0.4:22-10.0.0.1:39818.service - OpenSSH per-connection server daemon (10.0.0.1:39818).
Sep 13 10:36:35.519452 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 39818 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:35.520502 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:35.524593 systemd-logind[1543]: New session 19 of user core.
Sep 13 10:36:35.539142 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 10:36:35.840483 sshd[5557]: Connection closed by 10.0.0.1 port 39818
Sep 13 10:36:35.841208 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:35.851644 systemd[1]: sshd@18-10.0.0.4:22-10.0.0.1:39818.service: Deactivated successfully.
Sep 13 10:36:35.853465 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 10:36:35.854161 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit.
Sep 13 10:36:35.857262 systemd[1]: Started sshd@19-10.0.0.4:22-10.0.0.1:39832.service - OpenSSH per-connection server daemon (10.0.0.1:39832).
Sep 13 10:36:35.857916 systemd-logind[1543]: Removed session 19.
Sep 13 10:36:35.911848 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:35.913530 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:35.917633 systemd-logind[1543]: New session 20 of user core.
Sep 13 10:36:35.924144 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 10:36:36.508336 sshd[5571]: Connection closed by 10.0.0.1 port 39832
Sep 13 10:36:36.509300 sshd-session[5568]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:36.518864 systemd[1]: sshd@19-10.0.0.4:22-10.0.0.1:39832.service: Deactivated successfully.
Sep 13 10:36:36.521957 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 10:36:36.523974 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit.
Sep 13 10:36:36.529933 systemd[1]: Started sshd@20-10.0.0.4:22-10.0.0.1:39838.service - OpenSSH per-connection server daemon (10.0.0.1:39838).
Sep 13 10:36:36.532133 systemd-logind[1543]: Removed session 20.
Sep 13 10:36:36.579457 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 39838 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:36.580514 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:36.585109 systemd-logind[1543]: New session 21 of user core.
Sep 13 10:36:36.593139 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 10:36:36.862119 sshd[5592]: Connection closed by 10.0.0.1 port 39838
Sep 13 10:36:36.859793 sshd-session[5588]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:36.874886 systemd[1]: sshd@20-10.0.0.4:22-10.0.0.1:39838.service: Deactivated successfully.
Sep 13 10:36:36.879181 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 10:36:36.883575 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit.
Sep 13 10:36:36.888515 systemd[1]: Started sshd@21-10.0.0.4:22-10.0.0.1:39848.service - OpenSSH per-connection server daemon (10.0.0.1:39848).
Sep 13 10:36:36.891772 systemd-logind[1543]: Removed session 21.
Sep 13 10:36:36.936407 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 39848 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:36.938089 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:36.945419 systemd-logind[1543]: New session 22 of user core.
Sep 13 10:36:36.950201 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 10:36:37.098991 sshd[5607]: Connection closed by 10.0.0.1 port 39848
Sep 13 10:36:37.099682 sshd-session[5604]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:37.103211 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit.
Sep 13 10:36:37.103527 systemd[1]: sshd@21-10.0.0.4:22-10.0.0.1:39848.service: Deactivated successfully.
Sep 13 10:36:37.105789 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 10:36:37.108361 systemd-logind[1543]: Removed session 22.
Sep 13 10:36:41.114748 kubelet[2727]: E0913 10:36:41.114712 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:36:41.394165 containerd[1557]: time="2025-09-13T10:36:41.393974035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\" id:\"820ee7015ffae2fcaa991ca081fcd503c9718f6f4b7cb2df0c48e3b2bdf98b87\" pid:5633 exited_at:{seconds:1757759801 nanos:393698012}"
Sep 13 10:36:42.109076 systemd[1]: Started sshd@22-10.0.0.4:22-10.0.0.1:46776.service - OpenSSH per-connection server daemon (10.0.0.1:46776).
Sep 13 10:36:42.174223 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 46776 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:42.176069 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:42.186516 systemd-logind[1543]: New session 23 of user core.
Sep 13 10:36:42.190191 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 10:36:42.368206 sshd[5652]: Connection closed by 10.0.0.1 port 46776
Sep 13 10:36:42.368465 sshd-session[5649]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:42.373333 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit.
Sep 13 10:36:42.373836 systemd[1]: sshd@22-10.0.0.4:22-10.0.0.1:46776.service: Deactivated successfully.
Sep 13 10:36:42.376608 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 10:36:42.379631 systemd-logind[1543]: Removed session 23.
Sep 13 10:36:47.386109 systemd[1]: Started sshd@23-10.0.0.4:22-10.0.0.1:46812.service - OpenSSH per-connection server daemon (10.0.0.1:46812).
Sep 13 10:36:47.443231 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 46812 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:47.445279 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:47.450329 systemd-logind[1543]: New session 24 of user core.
Sep 13 10:36:47.457239 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 10:36:47.628667 sshd[5668]: Connection closed by 10.0.0.1 port 46812
Sep 13 10:36:47.629002 sshd-session[5665]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:47.633628 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit.
Sep 13 10:36:47.633962 systemd[1]: sshd@23-10.0.0.4:22-10.0.0.1:46812.service: Deactivated successfully.
Sep 13 10:36:47.635920 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 10:36:47.637864 systemd-logind[1543]: Removed session 24.
Sep 13 10:36:51.049221 containerd[1557]: time="2025-09-13T10:36:51.049100709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e194a3654a1f0b668e8439ee411117fe6844e6eff86aa58edfc10a3bb61bd8af\" id:\"eca160e29e00cdb6f6d02cf0eac57b0b5618738266b3f72643b2881015b0f662\" pid:5694 exited_at:{seconds:1757759811 nanos:48460743}"
Sep 13 10:36:51.114151 kubelet[2727]: E0913 10:36:51.114110 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:36:52.642812 systemd[1]: Started sshd@24-10.0.0.4:22-10.0.0.1:57428.service - OpenSSH per-connection server daemon (10.0.0.1:57428).
Sep 13 10:36:52.698407 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 57428 ssh2: RSA SHA256:I4tmlDyqp5RFEyqGKHaYdkjXvcdDV0E2+nrH9jspWZ4
Sep 13 10:36:52.700298 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:36:52.704522 systemd-logind[1543]: New session 25 of user core.
Sep 13 10:36:52.715150 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 10:36:52.936626 sshd[5711]: Connection closed by 10.0.0.1 port 57428
Sep 13 10:36:52.936964 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Sep 13 10:36:52.941426 systemd[1]: sshd@24-10.0.0.4:22-10.0.0.1:57428.service: Deactivated successfully.
Sep 13 10:36:52.943431 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 10:36:52.944247 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit.
Sep 13 10:36:52.945394 systemd-logind[1543]: Removed session 25.
Sep 13 10:36:54.318642 containerd[1557]: time="2025-09-13T10:36:54.318601695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f47e7abc99608a2c4eb79663ceef6ad485aa3ff037d59fa1715fb80248cf2ed\" id:\"1f6e5a718df63ca2dc86cc3324e5891e57c8f4cebafca94b47d5b322e91b382c\" pid:5735 exited_at:{seconds:1757759814 nanos:317771436}"