Sep 4 16:37:25.803098 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 14:31:01 -00 2025
Sep 4 16:37:25.803122 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:37:25.803131 kernel: BIOS-provided physical RAM map:
Sep 4 16:37:25.803138 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 16:37:25.803145 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 16:37:25.803151 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 16:37:25.803159 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 4 16:37:25.803168 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 4 16:37:25.803175 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 16:37:25.803182 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 16:37:25.803189 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 16:37:25.803196 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 16:37:25.803203 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 16:37:25.803209 kernel: NX (Execute Disable) protection: active
Sep 4 16:37:25.803220 kernel: APIC: Static calls initialized
Sep 4 16:37:25.803228 kernel: SMBIOS 2.8 present.
Sep 4 16:37:25.803235 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 16:37:25.803243 kernel: DMI: Memory slots populated: 1/1
Sep 4 16:37:25.803250 kernel: Hypervisor detected: KVM
Sep 4 16:37:25.803257 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 16:37:25.803265 kernel: kvm-clock: using sched offset of 3263301669 cycles
Sep 4 16:37:25.803272 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 16:37:25.803282 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 16:37:25.803291 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 16:37:25.803299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 16:37:25.803307 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 4 16:37:25.803315 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 16:37:25.803323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 16:37:25.803330 kernel: Using GB pages for direct mapping
Sep 4 16:37:25.803340 kernel: ACPI: Early table checksum verification disabled
Sep 4 16:37:25.803348 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 4 16:37:25.803356 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803364 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803372 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803380 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 16:37:25.803388 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803397 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803405 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803413 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:37:25.803421 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 4 16:37:25.803432 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 4 16:37:25.803442 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 16:37:25.803450 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 4 16:37:25.803458 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 4 16:37:25.803465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 4 16:37:25.803473 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 4 16:37:25.803481 kernel: No NUMA configuration found
Sep 4 16:37:25.803491 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 4 16:37:25.803499 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 4 16:37:25.803507 kernel: Zone ranges:
Sep 4 16:37:25.803515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 16:37:25.803523 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 4 16:37:25.803531 kernel: Normal empty
Sep 4 16:37:25.803539 kernel: Device empty
Sep 4 16:37:25.803546 kernel: Movable zone start for each node
Sep 4 16:37:25.803566 kernel: Early memory node ranges
Sep 4 16:37:25.803574 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 16:37:25.803597 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 4 16:37:25.803605 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 4 16:37:25.803613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 16:37:25.803621 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 16:37:25.803629 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 16:37:25.803640 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 16:37:25.803648 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 16:37:25.803656 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 16:37:25.803668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 16:37:25.803676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 16:37:25.803684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 16:37:25.803692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 16:37:25.803700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 16:37:25.803711 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 16:37:25.803719 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 16:37:25.803726 kernel: TSC deadline timer available
Sep 4 16:37:25.803734 kernel: CPU topo: Max. logical packages: 1
Sep 4 16:37:25.803742 kernel: CPU topo: Max. logical dies: 1
Sep 4 16:37:25.803750 kernel: CPU topo: Max. dies per package: 1
Sep 4 16:37:25.803758 kernel: CPU topo: Max. threads per core: 1
Sep 4 16:37:25.803768 kernel: CPU topo: Num. cores per package: 4
Sep 4 16:37:25.803775 kernel: CPU topo: Num. threads per package: 4
Sep 4 16:37:25.803783 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 16:37:25.803791 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 16:37:25.803799 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 16:37:25.803807 kernel: kvm-guest: setup PV sched yield
Sep 4 16:37:25.803815 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 16:37:25.803823 kernel: Booting paravirtualized kernel on KVM
Sep 4 16:37:25.803833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 16:37:25.803841 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 16:37:25.803849 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 16:37:25.803857 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 16:37:25.803865 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 16:37:25.803873 kernel: kvm-guest: PV spinlocks enabled
Sep 4 16:37:25.803901 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 16:37:25.803913 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:37:25.803922 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 16:37:25.803930 kernel: random: crng init done
Sep 4 16:37:25.803938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 16:37:25.803946 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 16:37:25.803954 kernel: Fallback order for Node 0: 0
Sep 4 16:37:25.803962 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 4 16:37:25.803972 kernel: Policy zone: DMA32
Sep 4 16:37:25.803980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 16:37:25.803988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 16:37:25.803996 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 4 16:37:25.804004 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 16:37:25.804012 kernel: Dynamic Preempt: voluntary
Sep 4 16:37:25.804020 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 16:37:25.804030 kernel: rcu: RCU event tracing is enabled.
Sep 4 16:37:25.804038 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 16:37:25.804046 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 16:37:25.804054 kernel: Rude variant of Tasks RCU enabled.
Sep 4 16:37:25.804062 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 16:37:25.804070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 16:37:25.804078 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 16:37:25.804093 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:37:25.804104 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:37:25.804112 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:37:25.804120 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 16:37:25.804129 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 16:37:25.804143 kernel: Console: colour VGA+ 80x25
Sep 4 16:37:25.804153 kernel: printk: legacy console [ttyS0] enabled
Sep 4 16:37:25.804161 kernel: ACPI: Core revision 20240827
Sep 4 16:37:25.804169 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 16:37:25.804178 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 16:37:25.804188 kernel: x2apic enabled
Sep 4 16:37:25.804196 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 16:37:25.804204 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 16:37:25.804213 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 16:37:25.804222 kernel: kvm-guest: setup PV IPIs
Sep 4 16:37:25.804231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 16:37:25.804239 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 16:37:25.804247 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 16:37:25.804256 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 16:37:25.804264 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 16:37:25.804272 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 16:37:25.804282 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 16:37:25.804291 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 16:37:25.804299 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 16:37:25.804307 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 16:37:25.804315 kernel: active return thunk: retbleed_return_thunk
Sep 4 16:37:25.804324 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 16:37:25.804332 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 16:37:25.804342 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 16:37:25.804350 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 16:37:25.804359 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 16:37:25.804367 kernel: active return thunk: srso_return_thunk
Sep 4 16:37:25.804376 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 16:37:25.804384 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 16:37:25.804394 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 16:37:25.804403 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 16:37:25.804411 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 16:37:25.804419 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 16:37:25.804436 kernel: Freeing SMP alternatives memory: 32K
Sep 4 16:37:25.804451 kernel: pid_max: default: 32768 minimum: 301
Sep 4 16:37:25.804460 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 16:37:25.804471 kernel: landlock: Up and running.
Sep 4 16:37:25.804479 kernel: SELinux: Initializing.
Sep 4 16:37:25.804487 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 16:37:25.804496 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 16:37:25.804508 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 16:37:25.804516 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 16:37:25.804525 kernel: ... version: 0
Sep 4 16:37:25.804535 kernel: ... bit width: 48
Sep 4 16:37:25.804543 kernel: ... generic registers: 6
Sep 4 16:37:25.804551 kernel: ... value mask: 0000ffffffffffff
Sep 4 16:37:25.804559 kernel: ... max period: 00007fffffffffff
Sep 4 16:37:25.804568 kernel: ... fixed-purpose events: 0
Sep 4 16:37:25.804576 kernel: ... event mask: 000000000000003f
Sep 4 16:37:25.804584 kernel: signal: max sigframe size: 1776
Sep 4 16:37:25.804592 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 16:37:25.804603 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 16:37:25.804611 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 16:37:25.804619 kernel: smp: Bringing up secondary CPUs ...
Sep 4 16:37:25.804627 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 16:37:25.804636 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 16:37:25.804644 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 16:37:25.804652 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 16:37:25.804663 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54288K init, 2680K bss, 136904K reserved, 0K cma-reserved)
Sep 4 16:37:25.804671 kernel: devtmpfs: initialized
Sep 4 16:37:25.804679 kernel: x86/mm: Memory block size: 128MB
Sep 4 16:37:25.804688 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 16:37:25.804696 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 16:37:25.804704 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 16:37:25.804713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 16:37:25.804723 kernel: audit: initializing netlink subsys (disabled)
Sep 4 16:37:25.804731 kernel: audit: type=2000 audit(1757003842.952:1): state=initialized audit_enabled=0 res=1
Sep 4 16:37:25.804739 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 16:37:25.804748 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 16:37:25.804756 kernel: cpuidle: using governor menu
Sep 4 16:37:25.804764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 16:37:25.804772 kernel: dca service started, version 1.12.1
Sep 4 16:37:25.804782 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 4 16:37:25.804791 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 4 16:37:25.804799 kernel: PCI: Using configuration type 1 for base access
Sep 4 16:37:25.804807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 16:37:25.804815 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 16:37:25.804824 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 16:37:25.804832 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 16:37:25.804842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 16:37:25.804850 kernel: ACPI: Added _OSI(Module Device)
Sep 4 16:37:25.804858 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 16:37:25.804866 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 16:37:25.804875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 16:37:25.804894 kernel: ACPI: Interpreter enabled
Sep 4 16:37:25.804913 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 16:37:25.804924 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 16:37:25.804933 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 16:37:25.804942 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 16:37:25.804952 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 16:37:25.804961 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 16:37:25.805196 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 16:37:25.805365 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 16:37:25.805533 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 16:37:25.805544 kernel: PCI host bridge to bus 0000:00
Sep 4 16:37:25.805706 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 16:37:25.805857 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 16:37:25.806075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 16:37:25.806241 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 4 16:37:25.806392 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 16:37:25.806582 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 16:37:25.806734 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 16:37:25.806984 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 16:37:25.807171 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 16:37:25.807337 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 4 16:37:25.807498 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 4 16:37:25.807663 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 4 16:37:25.807823 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 16:37:25.808013 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 16:37:25.808189 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 4 16:37:25.808350 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 4 16:37:25.808511 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 16:37:25.808680 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 16:37:25.808842 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 4 16:37:25.809104 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 4 16:37:25.809271 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 16:37:25.809445 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 16:37:25.809605 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 4 16:37:25.809765 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 4 16:37:25.809966 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 16:37:25.810877 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 4 16:37:25.811130 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 16:37:25.811297 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 16:37:25.811469 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 16:37:25.811629 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 4 16:37:25.811788 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 4 16:37:25.812002 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 16:37:25.812208 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 4 16:37:25.812222 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 16:37:25.812231 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 16:37:25.812241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 16:37:25.812250 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 16:37:25.812259 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 16:37:25.812272 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 16:37:25.812281 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 16:37:25.812290 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 16:37:25.812299 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 16:37:25.812308 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 16:37:25.812317 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 16:37:25.812326 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 16:37:25.812338 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 16:37:25.812347 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 16:37:25.812356 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 16:37:25.812365 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 16:37:25.812374 kernel: iommu: Default domain type: Translated
Sep 4 16:37:25.812383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 16:37:25.812392 kernel: PCI: Using ACPI for IRQ routing
Sep 4 16:37:25.812404 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 16:37:25.812413 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 16:37:25.812422 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 4 16:37:25.812586 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 16:37:25.812746 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 16:37:25.812937 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 16:37:25.812953 kernel: vgaarb: loaded
Sep 4 16:37:25.812963 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 16:37:25.812972 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 16:37:25.812981 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 16:37:25.812989 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 16:37:25.812999 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 16:37:25.813008 kernel: pnp: PnP ACPI init
Sep 4 16:37:25.813238 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 16:37:25.813254 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 16:37:25.813264 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 16:37:25.813272 kernel: NET: Registered PF_INET protocol family
Sep 4 16:37:25.813282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 16:37:25.813291 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 16:37:25.813300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 16:37:25.813313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 16:37:25.813322 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 16:37:25.813331 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 16:37:25.813340 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 16:37:25.813349 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 16:37:25.813358 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 16:37:25.813366 kernel: NET: Registered PF_XDP protocol family
Sep 4 16:37:25.813523 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 16:37:25.813672 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 16:37:25.813819 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 16:37:25.813985 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 4 16:37:25.814140 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 16:37:25.814286 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 16:37:25.814302 kernel: PCI: CLS 0 bytes, default 64
Sep 4 16:37:25.814312 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 16:37:25.814321 kernel: Initialise system trusted keyrings
Sep 4 16:37:25.814331 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 16:37:25.814340 kernel: Key type asymmetric registered
Sep 4 16:37:25.814349 kernel: Asymmetric key parser 'x509' registered
Sep 4 16:37:25.814358 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 16:37:25.814367 kernel: io scheduler mq-deadline registered
Sep 4 16:37:25.814379 kernel: io scheduler kyber registered
Sep 4 16:37:25.814388 kernel: io scheduler bfq registered
Sep 4 16:37:25.814397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 16:37:25.814407 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 16:37:25.814416 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 16:37:25.814425 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 16:37:25.814434 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 16:37:25.814446 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 16:37:25.814455 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 16:37:25.814464 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 16:37:25.814473 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 16:37:25.814643 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 16:37:25.814657 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 16:37:25.814835 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 16:37:25.815006 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T16:37:25 UTC (1757003845)
Sep 4 16:37:25.815167 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 16:37:25.815179 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 16:37:25.815188 kernel: NET: Registered PF_INET6 protocol family
Sep 4 16:37:25.815197 kernel: Segment Routing with IPv6
Sep 4 16:37:25.815206 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 16:37:25.815219 kernel: NET: Registered PF_PACKET protocol family
Sep 4 16:37:25.815228 kernel: Key type dns_resolver registered
Sep 4 16:37:25.815237 kernel: IPI shorthand broadcast: enabled
Sep 4 16:37:25.815246 kernel: sched_clock: Marking stable (2718002039, 109460108)->(2841279178, -13817031)
Sep 4 16:37:25.815255 kernel: registered taskstats version 1
Sep 4 16:37:25.815264 kernel: Loading compiled-in X.509 certificates
Sep 4 16:37:25.815273 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 250d2bafae7fa56c92cf187a0b8b7b2cdd349fc7'
Sep 4 16:37:25.815284 kernel: Demotion targets for Node 0: null
Sep 4 16:37:25.815293 kernel: Key type .fscrypt registered
Sep 4 16:37:25.815302 kernel: Key type fscrypt-provisioning registered
Sep 4 16:37:25.815311 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 16:37:25.815320 kernel: ima: Allocated hash algorithm: sha1
Sep 4 16:37:25.815329 kernel: ima: No architecture policies found
Sep 4 16:37:25.815338 kernel: clk: Disabling unused clocks
Sep 4 16:37:25.815349 kernel: Warning: unable to open an initial console.
Sep 4 16:37:25.815359 kernel: Freeing unused kernel image (initmem) memory: 54288K
Sep 4 16:37:25.815367 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 16:37:25.815376 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 4 16:37:25.815385 kernel: Run /init as init process
Sep 4 16:37:25.815394 kernel: with arguments:
Sep 4 16:37:25.815402 kernel: /init
Sep 4 16:37:25.815413 kernel: with environment:
Sep 4 16:37:25.815422 kernel: HOME=/
Sep 4 16:37:25.815430 kernel: TERM=linux
Sep 4 16:37:25.815439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 16:37:25.815450 systemd[1]: Successfully made /usr/ read-only.
Sep 4 16:37:25.815477 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 16:37:25.815489 systemd[1]: Detected virtualization kvm.
Sep 4 16:37:25.815498 systemd[1]: Detected architecture x86-64.
Sep 4 16:37:25.815508 systemd[1]: Running in initrd.
Sep 4 16:37:25.815517 systemd[1]: No hostname configured, using default hostname.
Sep 4 16:37:25.815527 systemd[1]: Hostname set to .
Sep 4 16:37:25.815538 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 16:37:25.815548 systemd[1]: Queued start job for default target initrd.target.
Sep 4 16:37:25.815557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:37:25.815567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:37:25.815577 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 16:37:25.815587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 16:37:25.815596 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 16:37:25.815609 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 16:37:25.815620 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 16:37:25.815630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 16:37:25.815640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:37:25.815649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:37:25.815661 systemd[1]: Reached target paths.target - Path Units.
Sep 4 16:37:25.815670 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 16:37:25.815680 systemd[1]: Reached target swap.target - Swaps.
Sep 4 16:37:25.815690 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 16:37:25.815699 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 16:37:25.815709 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 16:37:25.815718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 16:37:25.815730 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 16:37:25.815740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:37:25.815749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:37:25.815759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:37:25.815769 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 16:37:25.815778 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 16:37:25.815789 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 16:37:25.815801 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 16:37:25.815811 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 16:37:25.815821 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 16:37:25.815831 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 16:37:25.815843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 16:37:25.815853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:37:25.815862 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 16:37:25.815873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:37:25.815898 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 16:37:25.815911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 16:37:25.815940 systemd-journald[220]: Collecting audit messages is disabled. Sep 4 16:37:25.815967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 16:37:25.815977 systemd-journald[220]: Journal started Sep 4 16:37:25.815996 systemd-journald[220]: Runtime Journal (/run/log/journal/fc538b8354154562a80c09aec9088c54) is 6M, max 48.6M, 42.5M free. 
Sep 4 16:37:25.802477 systemd-modules-load[222]: Inserted module 'overlay' Sep 4 16:37:25.850537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 16:37:25.850559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 16:37:25.850575 kernel: Bridge firewalling registered Sep 4 16:37:25.828655 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 4 16:37:25.854182 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 16:37:25.859003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 16:37:25.861227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:37:25.866581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 16:37:25.867808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:37:25.870912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 16:37:25.879163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:37:25.886632 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 16:37:25.889612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:37:25.891955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:37:25.895800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 16:37:25.896479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 16:37:25.900231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 4 16:37:25.922776 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b Sep 4 16:37:25.939313 systemd-resolved[262]: Positive Trust Anchors: Sep 4 16:37:25.939327 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 16:37:25.939331 systemd-resolved[262]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 16:37:25.939362 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 16:37:25.941724 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 4 16:37:25.942859 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 16:37:25.943345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:37:26.036911 kernel: SCSI subsystem initialized Sep 4 16:37:26.044912 kernel: Loading iSCSI transport class v2.0-870. 
Sep 4 16:37:26.055927 kernel: iscsi: registered transport (tcp) Sep 4 16:37:26.076921 kernel: iscsi: registered transport (qla4xxx) Sep 4 16:37:26.076942 kernel: QLogic iSCSI HBA Driver Sep 4 16:37:26.096049 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 16:37:26.119953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:37:26.121830 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 16:37:26.173879 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 16:37:26.176292 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 16:37:26.243917 kernel: raid6: avx2x4 gen() 30386 MB/s Sep 4 16:37:26.260908 kernel: raid6: avx2x2 gen() 31322 MB/s Sep 4 16:37:26.277937 kernel: raid6: avx2x1 gen() 25746 MB/s Sep 4 16:37:26.277956 kernel: raid6: using algorithm avx2x2 gen() 31322 MB/s Sep 4 16:37:26.296004 kernel: raid6: .... xor() 19695 MB/s, rmw enabled Sep 4 16:37:26.296033 kernel: raid6: using avx2x2 recovery algorithm Sep 4 16:37:26.315910 kernel: xor: automatically using best checksumming function avx Sep 4 16:37:26.477914 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 16:37:26.486287 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 16:37:26.489907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:37:26.521797 systemd-udevd[474]: Using default interface naming scheme 'v257'. Sep 4 16:37:26.527783 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:37:26.531127 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 16:37:26.556132 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Sep 4 16:37:26.583434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 16:37:26.586785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 16:37:26.653843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:37:26.657495 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 16:37:26.695272 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 16:37:26.697369 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 16:37:26.698904 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 16:37:26.705908 kernel: AES CTR mode by8 optimization enabled Sep 4 16:37:26.708966 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 16:37:26.711923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 16:37:26.711951 kernel: GPT:9289727 != 19775487 Sep 4 16:37:26.711963 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 16:37:26.712959 kernel: GPT:9289727 != 19775487 Sep 4 16:37:26.712978 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 16:37:26.714029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 16:37:26.730905 kernel: libata version 3.00 loaded. Sep 4 16:37:26.736923 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 16:37:26.738910 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 16:37:26.738937 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 16:37:26.741192 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 16:37:26.741454 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 16:37:26.749518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 16:37:26.750343 kernel: scsi host0: ahci Sep 4 16:37:26.750569 kernel: scsi host1: ahci Sep 4 16:37:26.749644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 16:37:26.755799 kernel: scsi host2: ahci Sep 4 16:37:26.756027 kernel: scsi host3: ahci Sep 4 16:37:26.756226 kernel: scsi host4: ahci Sep 4 16:37:26.755710 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:37:26.763921 kernel: scsi host5: ahci Sep 4 16:37:26.764191 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 4 16:37:26.764205 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 4 16:37:26.764217 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 4 16:37:26.764233 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 4 16:37:26.764245 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 4 16:37:26.762414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:37:26.766513 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 4 16:37:26.790282 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 16:37:26.821087 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 16:37:26.826271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:37:26.836976 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 16:37:26.851961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 16:37:26.860865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 16:37:26.861920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 16:37:26.890843 disk-uuid[635]: Primary Header is updated. 
Sep 4 16:37:26.890843 disk-uuid[635]: Secondary Entries is updated. Sep 4 16:37:26.890843 disk-uuid[635]: Secondary Header is updated. Sep 4 16:37:26.894907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 16:37:26.898911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 16:37:27.072774 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 16:37:27.072812 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 16:37:27.072824 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 16:37:27.072899 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 16:37:27.073916 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 16:37:27.074916 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 16:37:27.074937 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 16:37:27.076021 kernel: ata3.00: applying bridge limits Sep 4 16:37:27.076921 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 16:37:27.076940 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 16:37:27.077375 kernel: ata3.00: configured for UDMA/100 Sep 4 16:37:27.079913 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 16:37:27.130454 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 16:37:27.130698 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 16:37:27.150947 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 16:37:27.458374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 16:37:27.459173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 16:37:27.460620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 16:37:27.463734 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 16:37:27.465680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 16:37:27.488075 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 4 16:37:27.900729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 16:37:27.901605 disk-uuid[636]: The operation has completed successfully. Sep 4 16:37:27.932850 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 16:37:27.932997 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 16:37:27.966345 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 16:37:27.978738 sh[665]: Success Sep 4 16:37:27.996273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 16:37:27.996305 kernel: device-mapper: uevent: version 1.0.3 Sep 4 16:37:27.997318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 16:37:28.005915 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 16:37:28.032591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 16:37:28.034711 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 16:37:28.060061 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 16:37:28.063925 kernel: BTRFS: device fsid ac7b5b49-8d71-4968-afd7-5e4410595bf4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677) Sep 4 16:37:28.065916 kernel: BTRFS info (device dm-0): first mount of filesystem ac7b5b49-8d71-4968-afd7-5e4410595bf4 Sep 4 16:37:28.065937 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 16:37:28.070461 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 16:37:28.070478 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 16:37:28.071534 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 16:37:28.073611 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 4 16:37:28.075751 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 16:37:28.078266 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 16:37:28.080762 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 16:37:28.099367 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 4 16:37:28.099391 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433 Sep 4 16:37:28.099406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 16:37:28.103118 kernel: BTRFS info (device vda6): turning on async discard Sep 4 16:37:28.103141 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 16:37:28.107991 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433 Sep 4 16:37:28.108767 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 16:37:28.111861 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 16:37:28.196197 ignition[751]: Ignition 2.22.0 Sep 4 16:37:28.196209 ignition[751]: Stage: fetch-offline Sep 4 16:37:28.196253 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:28.196261 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:28.196343 ignition[751]: parsed url from cmdline: "" Sep 4 16:37:28.196347 ignition[751]: no config URL provided Sep 4 16:37:28.196351 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 16:37:28.196359 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 4 16:37:28.196379 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 4 16:37:28.196384 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 16:37:28.208409 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 4 16:37:28.212804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 16:37:28.217852 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 16:37:28.250168 ignition[751]: parsing config with SHA512: b2bd076c00e68c2fc56b75a8bbae13ed101edf79781da4a16cb9bc72a1a9fe5b80f272f352b8d894b975573fa3f6a8355ae71a2bc35168bf43d87c02ece44a80 Sep 4 16:37:28.256119 systemd-networkd[855]: lo: Link UP Sep 4 16:37:28.256128 systemd-networkd[855]: lo: Gained carrier Sep 4 16:37:28.257678 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 16:37:28.257965 systemd-networkd[855]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:37:28.257969 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 16:37:28.258473 systemd-networkd[855]: eth0: Link UP Sep 4 16:37:28.258927 systemd-networkd[855]: eth0: Gained carrier Sep 4 16:37:28.258935 systemd-networkd[855]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:37:28.265617 ignition[751]: fetch-offline: fetch-offline passed Sep 4 16:37:28.260712 unknown[751]: fetched base config from "system" Sep 4 16:37:28.266333 ignition[751]: Ignition finished successfully Sep 4 16:37:28.260720 unknown[751]: fetched user config from "qemu" Sep 4 16:37:28.261566 systemd[1]: Reached target network.target - Network. Sep 4 16:37:28.271977 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 16:37:28.272489 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 16:37:28.273302 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 16:37:28.279000 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 16:37:28.305528 ignition[860]: Ignition 2.22.0 Sep 4 16:37:28.305541 ignition[860]: Stage: kargs Sep 4 16:37:28.305663 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:28.305675 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:28.308369 ignition[860]: kargs: kargs passed Sep 4 16:37:28.308412 ignition[860]: Ignition finished successfully Sep 4 16:37:28.313640 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 16:37:28.315627 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 16:37:28.350445 ignition[869]: Ignition 2.22.0 Sep 4 16:37:28.350458 ignition[869]: Stage: disks Sep 4 16:37:28.350575 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:28.350585 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:28.352917 ignition[869]: disks: disks passed Sep 4 16:37:28.352971 ignition[869]: Ignition finished successfully Sep 4 16:37:28.356939 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 16:37:28.358963 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 16:37:28.359225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 16:37:28.361396 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 16:37:28.361708 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 16:37:28.362196 systemd[1]: Reached target basic.target - Basic System. Sep 4 16:37:28.368374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 16:37:28.402127 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 16:37:28.409521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 16:37:28.411089 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 16:37:28.516911 kernel: EXT4-fs (vda9): mounted filesystem 5b9a7850-c07f-470b-a91c-362c3904243c r/w with ordered data mode. Quota mode: none. Sep 4 16:37:28.517749 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 16:37:28.518510 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 16:37:28.520857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 16:37:28.523051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 16:37:28.526489 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 4 16:37:28.526530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 16:37:28.526556 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 16:37:28.538773 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 16:37:28.540581 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 16:37:28.543910 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Sep 4 16:37:28.545951 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433 Sep 4 16:37:28.545971 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 16:37:28.548934 kernel: BTRFS info (device vda6): turning on async discard Sep 4 16:37:28.548956 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 16:37:28.550923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 16:37:28.576763 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 16:37:28.582092 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Sep 4 16:37:28.586794 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 16:37:28.591583 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 16:37:28.674469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 16:37:28.676543 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 16:37:28.678125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 16:37:28.698907 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433 Sep 4 16:37:28.710112 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 16:37:28.729319 ignition[1002]: INFO : Ignition 2.22.0 Sep 4 16:37:28.729319 ignition[1002]: INFO : Stage: mount Sep 4 16:37:28.730924 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:28.730924 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:28.730924 ignition[1002]: INFO : mount: mount passed Sep 4 16:37:28.730924 ignition[1002]: INFO : Ignition finished successfully Sep 4 16:37:28.734676 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 16:37:28.737841 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 16:37:29.064066 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 16:37:29.066330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 16:37:29.094911 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Sep 4 16:37:29.094965 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433 Sep 4 16:37:29.096438 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 16:37:29.099197 kernel: BTRFS info (device vda6): turning on async discard Sep 4 16:37:29.099228 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 16:37:29.100642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 16:37:29.128726 ignition[1031]: INFO : Ignition 2.22.0 Sep 4 16:37:29.128726 ignition[1031]: INFO : Stage: files Sep 4 16:37:29.130333 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:29.130333 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:29.130333 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Sep 4 16:37:29.134030 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 16:37:29.134030 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 16:37:29.138196 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 16:37:29.139711 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 16:37:29.141445 unknown[1031]: wrote ssh authorized keys file for user: core Sep 4 16:37:29.142570 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 16:37:29.144035 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 16:37:29.144035 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 16:37:29.183527 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 16:37:29.720464 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 
16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 16:37:29.722472 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 16:37:29.736344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 4 16:37:30.145049 systemd-networkd[855]: eth0: Gained IPv6LL Sep 4 16:37:30.513209 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 16:37:30.903592 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 16:37:30.903592 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 16:37:30.907348 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 16:37:30.912903 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 16:37:30.912903 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 16:37:30.915934 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 4 16:37:30.915934 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 16:37:30.918947 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 16:37:30.918947 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 4 16:37:30.918947 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 16:37:30.937459 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 16:37:30.940948 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 16:37:30.942591 
ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 16:37:30.942591 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 4 16:37:30.945291 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 16:37:30.945291 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 16:37:30.945291 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 16:37:30.945291 ignition[1031]: INFO : files: files passed Sep 4 16:37:30.945291 ignition[1031]: INFO : Ignition finished successfully Sep 4 16:37:30.951732 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 16:37:30.954758 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 16:37:30.957041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 16:37:30.972559 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 16:37:30.972685 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 16:37:30.975527 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 16:37:30.977048 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:37:30.978741 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:37:30.980283 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:37:30.980513 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 4 16:37:30.983381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 16:37:30.986515 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 16:37:31.033207 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 16:37:31.033344 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 16:37:31.033851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 16:37:31.036744 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 16:37:31.037264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 16:37:31.040112 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 16:37:31.074036 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 16:37:31.076527 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 16:37:31.100848 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:37:31.101361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 16:37:31.101688 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 16:37:31.102184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 16:37:31.102284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 16:37:31.108526 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 16:37:31.109263 systemd[1]: Stopped target basic.target - Basic System. Sep 4 16:37:31.109578 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 16:37:31.109912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 16:37:31.110384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 4 16:37:31.110699 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 16:37:31.111375 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 16:37:31.111677 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 16:37:31.112218 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 16:37:31.112530 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 16:37:31.112841 systemd[1]: Stopped target swap.target - Swaps. Sep 4 16:37:31.113336 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 16:37:31.113442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 16:37:31.131280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 16:37:31.131796 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 16:37:31.132251 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 16:37:31.137079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 16:37:31.139503 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 16:37:31.139616 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 16:37:31.142266 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 16:37:31.142379 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 16:37:31.142973 systemd[1]: Stopped target paths.target - Path Units. Sep 4 16:37:31.143365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 16:37:31.149992 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 16:37:31.152605 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 16:37:31.152942 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 4 16:37:31.154719 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 16:37:31.154812 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 16:37:31.156386 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 16:37:31.156467 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 16:37:31.158200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 16:37:31.158319 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 16:37:31.159768 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 16:37:31.159876 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 16:37:31.164208 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 16:37:31.164640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 16:37:31.164739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:37:31.165854 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 16:37:31.169656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 16:37:31.169780 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:37:31.172718 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 16:37:31.172815 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:37:31.174660 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 16:37:31.174757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 16:37:31.183713 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 16:37:31.183818 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 4 16:37:31.198235 ignition[1086]: INFO : Ignition 2.22.0 Sep 4 16:37:31.198235 ignition[1086]: INFO : Stage: umount Sep 4 16:37:31.199813 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 16:37:31.199813 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:37:31.199813 ignition[1086]: INFO : umount: umount passed Sep 4 16:37:31.199813 ignition[1086]: INFO : Ignition finished successfully Sep 4 16:37:31.203856 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 16:37:31.204515 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 16:37:31.204654 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 16:37:31.206648 systemd[1]: Stopped target network.target - Network. Sep 4 16:37:31.207398 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 16:37:31.207466 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 16:37:31.207736 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 16:37:31.207787 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 16:37:31.208213 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 16:37:31.208261 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 16:37:31.208516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 16:37:31.208558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 16:37:31.208968 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 16:37:31.209535 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 16:37:31.219840 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 16:37:31.220001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 16:37:31.228936 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 4 16:37:31.229085 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 16:37:31.233316 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 16:37:31.234482 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 16:37:31.234521 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 16:37:31.238447 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 16:37:31.240174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 16:37:31.240229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 16:37:31.240642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 16:37:31.240682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:37:31.241118 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 16:37:31.241159 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 16:37:31.241443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:37:31.262470 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 16:37:31.262597 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 16:37:31.265861 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 16:37:31.266107 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:37:31.266574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 16:37:31.266619 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 16:37:31.269481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 16:37:31.269515 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:37:31.269772 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 4 16:37:31.269815 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 16:37:31.270597 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 16:37:31.270641 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 16:37:31.277023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 16:37:31.277074 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 16:37:31.280842 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 16:37:31.281376 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 16:37:31.281423 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:37:31.281727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 16:37:31.281770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:37:31.282209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 16:37:31.282250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:37:31.309851 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 16:37:31.309990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 16:37:31.464391 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 16:37:31.464528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 16:37:31.466483 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 16:37:31.468072 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 16:37:31.468128 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 16:37:31.470779 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 16:37:31.496498 systemd[1]: Switching root. 
Sep 4 16:37:31.539377 systemd-journald[220]: Journal stopped Sep 4 16:37:32.863225 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 4 16:37:32.863297 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 16:37:32.863314 kernel: SELinux: policy capability open_perms=1 Sep 4 16:37:32.863326 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 16:37:32.863348 kernel: SELinux: policy capability always_check_network=0 Sep 4 16:37:32.863360 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 16:37:32.863372 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 16:37:32.863393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 16:37:32.863410 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 16:37:32.863422 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 16:37:32.863434 kernel: audit: type=1403 audit(1757003852.092:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 16:37:32.863451 systemd[1]: Successfully loaded SELinux policy in 61.390ms. Sep 4 16:37:32.863467 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.085ms. Sep 4 16:37:32.863481 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 16:37:32.863499 systemd[1]: Detected virtualization kvm. Sep 4 16:37:32.863514 systemd[1]: Detected architecture x86-64. Sep 4 16:37:32.863526 systemd[1]: Detected first boot. Sep 4 16:37:32.863539 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Sep 4 16:37:32.863552 zram_generator::config[1133]: No configuration found. 
Sep 4 16:37:32.863566 kernel: Guest personality initialized and is inactive Sep 4 16:37:32.863583 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 16:37:32.863597 kernel: Initialized host personality Sep 4 16:37:32.863609 kernel: NET: Registered PF_VSOCK protocol family Sep 4 16:37:32.863623 systemd[1]: Populated /etc with preset unit settings. Sep 4 16:37:32.863636 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 16:37:32.863649 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 16:37:32.863662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 16:37:32.863675 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 16:37:32.863690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 16:37:32.863703 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 16:37:32.863715 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 16:37:32.863728 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 16:37:32.863740 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 16:37:32.863754 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 16:37:32.863767 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 16:37:32.863781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 16:37:32.863794 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 16:37:32.863807 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 16:37:32.863819 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Sep 4 16:37:32.863832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 16:37:32.863845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 16:37:32.863860 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 16:37:32.863873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 16:37:32.863987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 16:37:32.864002 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 16:37:32.864015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 16:37:32.864027 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 16:37:32.864040 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 16:37:32.864055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 16:37:32.864068 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 16:37:32.864081 systemd[1]: Reached target slices.target - Slice Units. Sep 4 16:37:32.864094 systemd[1]: Reached target swap.target - Swaps. Sep 4 16:37:32.864106 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 16:37:32.864119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 16:37:32.864134 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 16:37:32.864148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 16:37:32.864161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 16:37:32.864174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:37:32.864187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Sep 4 16:37:32.864200 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 16:37:32.864213 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 16:37:32.864225 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 16:37:32.864238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:37:32.864253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 16:37:32.864265 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 16:37:32.864278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 16:37:32.864291 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 16:37:32.864304 systemd[1]: Reached target machines.target - Containers. Sep 4 16:37:32.864317 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 16:37:32.864332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:37:32.864344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 16:37:32.864357 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 16:37:32.864369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:37:32.864382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 16:37:32.864394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:37:32.864408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 16:37:32.864423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 4 16:37:32.864436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 16:37:32.864449 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 16:37:32.864461 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 16:37:32.864474 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 16:37:32.864487 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 16:37:32.864500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:37:32.864515 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 16:37:32.864528 kernel: fuse: init (API version 7.41) Sep 4 16:37:32.864539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 16:37:32.864552 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 16:37:32.864565 kernel: loop: module loaded Sep 4 16:37:32.864576 kernel: ACPI: bus type drm_connector registered Sep 4 16:37:32.864589 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 16:37:32.864603 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 16:37:32.864616 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 16:37:32.864629 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 16:37:32.864642 systemd[1]: Stopped verity-setup.service. Sep 4 16:37:32.864656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 16:37:32.864670 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 16:37:32.864702 systemd-journald[1209]: Collecting audit messages is disabled. Sep 4 16:37:32.864726 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 16:37:32.864739 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 16:37:32.864752 systemd-journald[1209]: Journal started Sep 4 16:37:32.864776 systemd-journald[1209]: Runtime Journal (/run/log/journal/fc538b8354154562a80c09aec9088c54) is 6M, max 48.6M, 42.5M free. Sep 4 16:37:32.616979 systemd[1]: Queued start job for default target multi-user.target. Sep 4 16:37:32.637723 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 16:37:32.638288 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 16:37:32.867916 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 16:37:32.869244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 16:37:32.870422 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 16:37:32.871618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 16:37:32.873110 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 16:37:32.874666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:37:32.876224 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 16:37:32.876456 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 16:37:32.878007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 16:37:32.878237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:37:32.879626 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 16:37:32.879843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 4 16:37:32.881266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 16:37:32.881473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:37:32.883079 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 16:37:32.883307 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 16:37:32.884635 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 16:37:32.884842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 16:37:32.886441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 16:37:32.888153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:37:32.890452 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 16:37:32.892091 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 16:37:32.907404 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 16:37:32.909004 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Sep 4 16:37:32.911417 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 16:37:32.913443 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 16:37:32.914508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 16:37:32.914594 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 16:37:32.916578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 16:37:32.918070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 16:37:32.926975 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 16:37:32.928945 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 16:37:32.930071 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 16:37:32.931020 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 16:37:32.932145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 16:37:32.933094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:37:32.934812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 16:37:32.944013 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 16:37:32.951231 systemd-journald[1209]: Time spent on flushing to /var/log/journal/fc538b8354154562a80c09aec9088c54 is 20.719ms for 974 entries. Sep 4 16:37:32.951231 systemd-journald[1209]: System Journal (/var/log/journal/fc538b8354154562a80c09aec9088c54) is 8M, max 195.6M, 187.6M free. Sep 4 16:37:32.990324 systemd-journald[1209]: Received client request to flush runtime journal. Sep 4 16:37:32.990378 kernel: loop0: detected capacity change from 0 to 128016 Sep 4 16:37:32.990403 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 16:37:32.948303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:37:32.950934 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 16:37:32.953107 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 16:37:32.965632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:37:32.967389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 4 16:37:32.970486 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 16:37:32.973410 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 16:37:32.987300 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 16:37:32.991025 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 16:37:32.999300 kernel: loop1: detected capacity change from 0 to 111000 Sep 4 16:37:32.995007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 16:37:32.996768 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 16:37:33.008021 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 16:37:33.019093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 16:37:33.020917 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 4 16:37:33.020936 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 4 16:37:33.025454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:37:33.030918 kernel: loop2: detected capacity change from 0 to 221472 Sep 4 16:37:33.051950 kernel: loop3: detected capacity change from 0 to 128016 Sep 4 16:37:33.052433 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 16:37:33.063922 kernel: loop4: detected capacity change from 0 to 111000 Sep 4 16:37:33.068920 kernel: loop5: detected capacity change from 0 to 221472 Sep 4 16:37:33.074308 (sd-merge)[1278]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Sep 4 16:37:33.077769 (sd-merge)[1278]: Merged extensions into '/usr'. Sep 4 16:37:33.082535 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 16:37:33.082552 systemd[1]: Reloading... 
Sep 4 16:37:33.122817 systemd-resolved[1268]: Positive Trust Anchors: Sep 4 16:37:33.122832 systemd-resolved[1268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 16:37:33.122836 systemd-resolved[1268]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 16:37:33.122867 systemd-resolved[1268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 16:37:33.130121 systemd-resolved[1268]: Defaulting to hostname 'linux'. Sep 4 16:37:33.136917 zram_generator::config[1308]: No configuration found. Sep 4 16:37:33.326728 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 16:37:33.327193 systemd[1]: Reloading finished in 244 ms. Sep 4 16:37:33.359150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 16:37:33.360688 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 16:37:33.364437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:37:33.385182 systemd[1]: Starting ensure-sysext.service... Sep 4 16:37:33.386966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 16:37:33.398182 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Sep 4 16:37:33.398196 systemd[1]: Reloading... 
Sep 4 16:37:33.404518 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 16:37:33.404554 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 16:37:33.404837 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 16:37:33.405481 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 16:37:33.406455 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 16:37:33.406785 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Sep 4 16:37:33.406934 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Sep 4 16:37:33.412718 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:37:33.412791 systemd-tmpfiles[1349]: Skipping /boot Sep 4 16:37:33.422829 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:37:33.422842 systemd-tmpfiles[1349]: Skipping /boot Sep 4 16:37:33.459928 zram_generator::config[1384]: No configuration found. Sep 4 16:37:33.631060 systemd[1]: Reloading finished in 232 ms. Sep 4 16:37:33.654309 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 16:37:33.679430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:37:33.689145 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 16:37:33.691416 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 16:37:33.718287 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 16:37:33.720772 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Sep 4 16:37:33.724210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:37:33.727042 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 16:37:33.732153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.732320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:37:33.734800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:37:33.742664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:37:33.746145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:37:33.748140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:37:33.748245 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:37:33.748334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.750572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:37:33.750879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:37:33.753541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:37:33.753976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:37:33.757658 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 16:37:33.759909 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:37:33.760339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:37:33.773696 systemd-udevd[1423]: Using default interface naming scheme 'v257'.
Sep 4 16:37:33.775281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.775490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:37:33.778128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:37:33.781198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:37:33.786807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:37:33.788020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:37:33.788188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:37:33.788280 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.789866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 16:37:33.791933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:37:33.792167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:37:33.794776 augenrules[1453]: No rules
Sep 4 16:37:33.795522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:37:33.795729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:37:33.797677 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 16:37:33.802087 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 16:37:33.805053 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:37:33.805337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:37:33.814223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:37:33.819276 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 16:37:33.830062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.833051 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 16:37:33.835160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:37:33.838046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:37:33.840702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 16:37:33.845089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:37:33.852279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:37:33.853421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 16:37:33.853948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:37:33.858071 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 16:37:33.859274 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 16:37:33.859298 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:37:33.860719 systemd[1]: Finished ensure-sysext.service.
Sep 4 16:37:33.862302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:37:33.867351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:37:33.869363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:37:33.871064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:37:33.874647 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 16:37:33.876432 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 16:37:33.885928 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:37:33.886149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:37:33.892745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 16:37:33.892808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 16:37:33.899730 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 16:37:33.904058 augenrules[1491]: /sbin/augenrules: No change
Sep 4 16:37:33.921085 augenrules[1522]: No rules
Sep 4 16:37:33.921330 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 16:37:33.922405 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 16:37:33.926304 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 16:37:33.969228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 16:37:33.971912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 16:37:34.001475 systemd-networkd[1496]: lo: Link UP
Sep 4 16:37:34.004638 systemd-networkd[1496]: lo: Gained carrier
Sep 4 16:37:34.010221 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 16:37:34.010360 systemd-networkd[1496]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:37:34.010372 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 16:37:34.010710 systemd[1]: Reached target network.target - Network.
Sep 4 16:37:34.011209 systemd-networkd[1496]: eth0: Link UP
Sep 4 16:37:34.012198 systemd-networkd[1496]: eth0: Gained carrier
Sep 4 16:37:34.012215 systemd-networkd[1496]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:37:34.016204 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 16:37:34.012563 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 16:37:34.016481 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 16:37:34.021499 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 16:37:34.022806 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 16:37:34.024436 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 16:37:34.031981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 16:37:34.031959 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 16:37:34.032724 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection.
Sep 4 16:37:34.034822 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 16:37:34.035094 systemd-timesyncd[1512]: Initial clock synchronization to Thu 2025-09-04 16:37:34.215287 UTC.
Sep 4 16:37:34.037932 kernel: ACPI: button: Power Button [PWRF]
Sep 4 16:37:34.041151 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 4 16:37:34.041452 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 4 16:37:34.045952 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 16:37:34.107293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:37:34.180275 ldconfig[1420]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 16:37:34.191533 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 16:37:34.193750 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 16:37:34.228935 kernel: kvm_amd: TSC scaling supported
Sep 4 16:37:34.229027 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 16:37:34.229042 kernel: kvm_amd: Nested Paging enabled
Sep 4 16:37:34.229056 kernel: kvm_amd: LBR virtualization supported
Sep 4 16:37:34.229069 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 16:37:34.229092 kernel: kvm_amd: Virtual GIF supported
Sep 4 16:37:34.239101 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 16:37:34.248007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:37:34.250925 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 16:37:34.252143 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 16:37:34.253368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 16:37:34.254791 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 16:37:34.256185 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 16:37:34.257921 kernel: EDAC MC: Ver: 3.0.0
Sep 4 16:37:34.257787 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 16:37:34.259071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 16:37:34.260331 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 16:37:34.260361 systemd[1]: Reached target paths.target - Path Units.
Sep 4 16:37:34.261281 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 16:37:34.262950 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 16:37:34.266340 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 16:37:34.269277 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 16:37:34.270668 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 16:37:34.271932 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 16:37:34.275572 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 16:37:34.276856 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 16:37:34.278595 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 16:37:34.280364 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 16:37:34.281320 systemd[1]: Reached target basic.target - Basic System.
Sep 4 16:37:34.282268 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 16:37:34.282296 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 16:37:34.283244 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 16:37:34.285202 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 16:37:34.287097 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 16:37:34.289140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 16:37:34.293035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 16:37:34.294130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 16:37:34.303505 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 16:37:34.306244 jq[1578]: false
Sep 4 16:37:34.306358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 16:37:34.309399 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 16:37:34.311927 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 16:37:34.315536 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 16:37:34.318003 extend-filesystems[1579]: Found /dev/vda6
Sep 4 16:37:34.318995 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Sep 4 16:37:34.318441 oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Sep 4 16:37:34.320343 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 16:37:34.321436 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 16:37:34.321860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 16:37:34.321961 extend-filesystems[1579]: Found /dev/vda9
Sep 4 16:37:34.322856 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 16:37:34.326899 extend-filesystems[1579]: Checking size of /dev/vda9
Sep 4 16:37:34.327021 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 16:37:34.330953 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting users, quitting
Sep 4 16:37:34.330870 oslogin_cache_refresh[1580]: Failure getting users, quitting
Sep 4 16:37:34.331029 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 16:37:34.331029 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing group entry cache
Sep 4 16:37:34.330980 oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 16:37:34.331021 oslogin_cache_refresh[1580]: Refreshing group entry cache
Sep 4 16:37:34.331881 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 16:37:34.333480 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 16:37:34.333724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 16:37:34.337651 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 16:37:34.337918 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 16:37:34.338024 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting groups, quitting
Sep 4 16:37:34.338024 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 16:37:34.337993 oslogin_cache_refresh[1580]: Failure getting groups, quitting
Sep 4 16:37:34.338003 oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 16:37:34.339400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 16:37:34.339631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 16:37:34.341307 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 16:37:34.341552 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 16:37:34.342657 extend-filesystems[1579]: Resized partition /dev/vda9
Sep 4 16:37:34.345968 jq[1598]: true
Sep 4 16:37:34.351107 extend-filesystems[1608]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 16:37:34.354777 (ntainerd)[1609]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 16:37:34.355953 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 16:37:34.357798 update_engine[1593]: I20250904 16:37:34.357739 1593 main.cc:92] Flatcar Update Engine starting
Sep 4 16:37:34.362089 jq[1612]: true
Sep 4 16:37:34.377923 tar[1603]: linux-amd64/helm
Sep 4 16:37:34.383968 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 16:37:34.409867 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 16:37:34.409867 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 16:37:34.409867 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 16:37:34.416089 extend-filesystems[1579]: Resized filesystem in /dev/vda9
Sep 4 16:37:34.414214 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 16:37:34.415936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 16:37:34.427030 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 16:37:34.427054 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 16:37:34.427262 systemd-logind[1591]: New seat seat0.
Sep 4 16:37:34.428331 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 16:37:34.441082 dbus-daemon[1576]: [system] SELinux support is enabled
Sep 4 16:37:34.441562 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 16:37:34.449262 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 16:37:34.449812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 16:37:34.450296 update_engine[1593]: I20250904 16:37:34.449852 1593 update_check_scheduler.cc:74] Next update check in 11m32s
Sep 4 16:37:34.449839 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 16:37:34.451106 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 16:37:34.451125 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 16:37:34.457792 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 16:37:34.458934 bash[1645]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 16:37:34.461240 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 16:37:34.464941 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 16:37:34.467434 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 16:37:34.511528 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 16:37:34.565461 containerd[1609]: time="2025-09-04T16:37:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 16:37:34.566341 containerd[1609]: time="2025-09-04T16:37:34.566256599Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 16:37:34.574638 containerd[1609]: time="2025-09-04T16:37:34.574607758Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.868µs"
Sep 4 16:37:34.574638 containerd[1609]: time="2025-09-04T16:37:34.574631642Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 16:37:34.574689 containerd[1609]: time="2025-09-04T16:37:34.574645959Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 16:37:34.574828 containerd[1609]: time="2025-09-04T16:37:34.574792354Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 16:37:34.574828 containerd[1609]: time="2025-09-04T16:37:34.574810107Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 16:37:34.574902 containerd[1609]: time="2025-09-04T16:37:34.574829844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:37:34.574932 containerd[1609]: time="2025-09-04T16:37:34.574909273Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:37:34.574932 containerd[1609]: time="2025-09-04T16:37:34.574921075Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575144 containerd[1609]: time="2025-09-04T16:37:34.575116692Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575144 containerd[1609]: time="2025-09-04T16:37:34.575136008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575189 containerd[1609]: time="2025-09-04T16:37:34.575145726Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575189 containerd[1609]: time="2025-09-04T16:37:34.575153831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575352 containerd[1609]: time="2025-09-04T16:37:34.575239602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575476 containerd[1609]: time="2025-09-04T16:37:34.575451740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575509 containerd[1609]: time="2025-09-04T16:37:34.575487607Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:37:34.575509 containerd[1609]: time="2025-09-04T16:37:34.575498047Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 16:37:34.575635 containerd[1609]: time="2025-09-04T16:37:34.575543642Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 16:37:34.578575 containerd[1609]: time="2025-09-04T16:37:34.578380752Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 16:37:34.578575 containerd[1609]: time="2025-09-04T16:37:34.578459680Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 16:37:34.583488 containerd[1609]: time="2025-09-04T16:37:34.583439447Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 16:37:34.583549 containerd[1609]: time="2025-09-04T16:37:34.583525859Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 16:37:34.583574 containerd[1609]: time="2025-09-04T16:37:34.583550686Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 16:37:34.583661 containerd[1609]: time="2025-09-04T16:37:34.583640734Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 16:37:34.583686 containerd[1609]: time="2025-09-04T16:37:34.583664239Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 16:37:34.583686 containerd[1609]: time="2025-09-04T16:37:34.583678034Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 16:37:34.583733 containerd[1609]: time="2025-09-04T16:37:34.583691460Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 16:37:34.583733 containerd[1609]: time="2025-09-04T16:37:34.583707229Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 16:37:34.583733 containerd[1609]: time="2025-09-04T16:37:34.583722317Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 16:37:34.583733 containerd[1609]: time="2025-09-04T16:37:34.583733098Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 16:37:34.583802 containerd[1609]: time="2025-09-04T16:37:34.583744509Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 16:37:34.583802 containerd[1609]: time="2025-09-04T16:37:34.583759958Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 16:37:34.583958 containerd[1609]: time="2025-09-04T16:37:34.583929356Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 16:37:34.583958 containerd[1609]: time="2025-09-04T16:37:34.583956597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 16:37:34.584004 containerd[1609]: time="2025-09-04T16:37:34.583974901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 16:37:34.584004 containerd[1609]: time="2025-09-04T16:37:34.583987204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 16:37:34.584043 containerd[1609]: time="2025-09-04T16:37:34.584008484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 16:37:34.584043 containerd[1609]: time="2025-09-04T16:37:34.584021378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 16:37:34.584043 containerd[1609]: time="2025-09-04T16:37:34.584034142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 16:37:34.584108 containerd[1609]: time="2025-09-04T16:37:34.584046235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 16:37:34.584108 containerd[1609]: time="2025-09-04T16:37:34.584058438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 16:37:34.584108 containerd[1609]: time="2025-09-04T16:37:34.584070250Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 16:37:34.584108 containerd[1609]: time="2025-09-04T16:37:34.584082272Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 16:37:34.584183 containerd[1609]: time="2025-09-04T16:37:34.584149839Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 16:37:34.584183 containerd[1609]: time="2025-09-04T16:37:34.584164446Z" level=info msg="Start snapshots syncer"
Sep 4 16:37:34.584220 containerd[1609]: time="2025-09-04T16:37:34.584192389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 16:37:34.584462 containerd[1609]: time="2025-09-04T16:37:34.584423041Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 16:37:34.584563 containerd[1609]: time="2025-09-04T16:37:34.584473746Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 16:37:34.584563 containerd[1609]: time="2025-09-04T16:37:34.584536304Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584640800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584666167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584675384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584684321Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584695332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 16:37:34.584701 containerd[1609]: time="2025-09-04T16:37:34.584705581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584716311Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584740727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584751557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584761336Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584789809Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584802342Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584810478Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:37:34.584816 containerd[1609]: time="2025-09-04T16:37:34.584820496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584828982Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584842738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584853518Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584900797Z" level=info msg="runtime interface created"
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584906959Z" level=info msg="created NRI interface"
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584919612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584931895Z" level=info msg="Connect containerd service"
Sep 4 16:37:34.584983 containerd[1609]: time="2025-09-04T16:37:34.584953586Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 16:37:34.585721 containerd[1609]:
time="2025-09-04T16:37:34.585625105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 16:37:34.668832 containerd[1609]: time="2025-09-04T16:37:34.668785761Z" level=info msg="Start subscribing containerd event" Sep 4 16:37:34.669044 containerd[1609]: time="2025-09-04T16:37:34.668997007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 16:37:34.669116 containerd[1609]: time="2025-09-04T16:37:34.669081516Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669022605Z" level=info msg="Start recovering state" Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669256744Z" level=info msg="Start event monitor" Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669276421Z" level=info msg="Start cni network conf syncer for default" Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669283725Z" level=info msg="Start streaming server" Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669301809Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669309604Z" level=info msg="runtime interface starting up..." Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669315444Z" level=info msg="starting plugins..." Sep 4 16:37:34.669496 containerd[1609]: time="2025-09-04T16:37:34.669341353Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 16:37:34.669825 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 16:37:34.670271 containerd[1609]: time="2025-09-04T16:37:34.670257170Z" level=info msg="containerd successfully booted in 0.105301s" Sep 4 16:37:34.675586 tar[1603]: linux-amd64/LICENSE Sep 4 16:37:34.675668 tar[1603]: linux-amd64/README.md Sep 4 16:37:34.700817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 16:37:34.898037 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 16:37:34.921053 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 16:37:34.923651 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 16:37:34.958636 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 16:37:34.958907 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 16:37:34.961345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 16:37:34.989289 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 16:37:34.991837 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 16:37:34.993872 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 16:37:34.995106 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 16:37:35.329422 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 4 16:37:35.332430 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 16:37:35.334103 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 16:37:35.336379 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 16:37:35.338716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:37:35.340781 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 16:37:35.363130 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 16:37:35.364678 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Sep 4 16:37:35.364981 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 16:37:35.367156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 16:37:36.045433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:37:36.047067 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 16:37:36.048282 systemd[1]: Startup finished in 2.772s (kernel) + 6.470s (initrd) + 4.014s (userspace) = 13.257s. Sep 4 16:37:36.049391 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 16:37:36.451853 kubelet[1718]: E0904 16:37:36.451794 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 16:37:36.455755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 16:37:36.455967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 16:37:36.456333 systemd[1]: kubelet.service: Consumed 954ms CPU time, 264.3M memory peak. Sep 4 16:37:38.708996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 16:37:38.710239 systemd[1]: Started sshd@0-10.0.0.3:22-10.0.0.1:50512.service - OpenSSH per-connection server daemon (10.0.0.1:50512). Sep 4 16:37:38.779449 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 50512 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:38.781514 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:38.787694 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 4 16:37:38.788772 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 16:37:38.794502 systemd-logind[1591]: New session 1 of user core. Sep 4 16:37:38.809177 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 16:37:38.811973 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 16:37:38.829053 (systemd)[1736]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 16:37:38.831164 systemd-logind[1591]: New session c1 of user core. Sep 4 16:37:38.970018 systemd[1736]: Queued start job for default target default.target. Sep 4 16:37:38.987054 systemd[1736]: Created slice app.slice - User Application Slice. Sep 4 16:37:38.987078 systemd[1736]: Reached target paths.target - Paths. Sep 4 16:37:38.987115 systemd[1736]: Reached target timers.target - Timers. Sep 4 16:37:38.988454 systemd[1736]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 16:37:38.998673 systemd[1736]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 16:37:38.998746 systemd[1736]: Reached target sockets.target - Sockets. Sep 4 16:37:38.998783 systemd[1736]: Reached target basic.target - Basic System. Sep 4 16:37:38.998824 systemd[1736]: Reached target default.target - Main User Target. Sep 4 16:37:38.998858 systemd[1736]: Startup finished in 161ms. Sep 4 16:37:38.999242 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 16:37:39.000753 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 16:37:39.062009 systemd[1]: Started sshd@1-10.0.0.3:22-10.0.0.1:50522.service - OpenSSH per-connection server daemon (10.0.0.1:50522). 
Sep 4 16:37:39.114804 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 50522 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:39.116122 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:39.120281 systemd-logind[1591]: New session 2 of user core. Sep 4 16:37:39.134013 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 16:37:39.187191 sshd[1750]: Connection closed by 10.0.0.1 port 50522 Sep 4 16:37:39.187476 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Sep 4 16:37:39.201382 systemd[1]: sshd@1-10.0.0.3:22-10.0.0.1:50522.service: Deactivated successfully. Sep 4 16:37:39.203070 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 16:37:39.203749 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Sep 4 16:37:39.206267 systemd[1]: Started sshd@2-10.0.0.3:22-10.0.0.1:50534.service - OpenSSH per-connection server daemon (10.0.0.1:50534). Sep 4 16:37:39.206791 systemd-logind[1591]: Removed session 2. Sep 4 16:37:39.254217 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 50534 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:39.255341 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:39.259187 systemd-logind[1591]: New session 3 of user core. Sep 4 16:37:39.270012 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 16:37:39.318571 sshd[1759]: Connection closed by 10.0.0.1 port 50534 Sep 4 16:37:39.318981 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Sep 4 16:37:39.327360 systemd[1]: sshd@2-10.0.0.3:22-10.0.0.1:50534.service: Deactivated successfully. Sep 4 16:37:39.329124 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 16:37:39.329808 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. 
Sep 4 16:37:39.332397 systemd[1]: Started sshd@3-10.0.0.3:22-10.0.0.1:50546.service - OpenSSH per-connection server daemon (10.0.0.1:50546). Sep 4 16:37:39.333009 systemd-logind[1591]: Removed session 3. Sep 4 16:37:39.385715 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 50546 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:39.386950 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:39.390807 systemd-logind[1591]: New session 4 of user core. Sep 4 16:37:39.401016 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 16:37:39.454139 sshd[1768]: Connection closed by 10.0.0.1 port 50546 Sep 4 16:37:39.454466 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Sep 4 16:37:39.462234 systemd[1]: sshd@3-10.0.0.3:22-10.0.0.1:50546.service: Deactivated successfully. Sep 4 16:37:39.463892 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 16:37:39.464603 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Sep 4 16:37:39.467124 systemd[1]: Started sshd@4-10.0.0.3:22-10.0.0.1:50554.service - OpenSSH per-connection server daemon (10.0.0.1:50554). Sep 4 16:37:39.467661 systemd-logind[1591]: Removed session 4. Sep 4 16:37:39.528383 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 50554 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:39.529575 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:39.533276 systemd-logind[1591]: New session 5 of user core. Sep 4 16:37:39.544999 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 16:37:39.602971 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 16:37:39.603275 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:37:39.617668 sudo[1778]: pam_unix(sudo:session): session closed for user root Sep 4 16:37:39.619365 sshd[1777]: Connection closed by 10.0.0.1 port 50554 Sep 4 16:37:39.619711 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Sep 4 16:37:39.633382 systemd[1]: sshd@4-10.0.0.3:22-10.0.0.1:50554.service: Deactivated successfully. Sep 4 16:37:39.635088 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 16:37:39.635782 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Sep 4 16:37:39.638418 systemd[1]: Started sshd@5-10.0.0.3:22-10.0.0.1:50568.service - OpenSSH per-connection server daemon (10.0.0.1:50568). Sep 4 16:37:39.639007 systemd-logind[1591]: Removed session 5. Sep 4 16:37:39.697645 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 50568 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:39.699205 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:39.703552 systemd-logind[1591]: New session 6 of user core. Sep 4 16:37:39.715077 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 16:37:39.768985 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 16:37:39.769285 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:37:40.021069 sudo[1789]: pam_unix(sudo:session): session closed for user root Sep 4 16:37:40.027646 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 16:37:40.027948 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:37:40.037431 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 16:37:40.089022 augenrules[1811]: No rules Sep 4 16:37:40.090702 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 16:37:40.091003 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 16:37:40.092106 sudo[1788]: pam_unix(sudo:session): session closed for user root Sep 4 16:37:40.093582 sshd[1787]: Connection closed by 10.0.0.1 port 50568 Sep 4 16:37:40.093972 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Sep 4 16:37:40.102240 systemd[1]: sshd@5-10.0.0.3:22-10.0.0.1:50568.service: Deactivated successfully. Sep 4 16:37:40.104092 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 16:37:40.104778 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Sep 4 16:37:40.107381 systemd[1]: Started sshd@6-10.0.0.3:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Sep 4 16:37:40.107994 systemd-logind[1591]: Removed session 6. Sep 4 16:37:40.160426 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:37:40.161597 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:37:40.165742 systemd-logind[1591]: New session 7 of user core. 
Sep 4 16:37:40.176013 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 16:37:40.229770 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 16:37:40.230093 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:37:40.517310 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 16:37:40.537166 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 16:37:40.759469 dockerd[1845]: time="2025-09-04T16:37:40.759393303Z" level=info msg="Starting up" Sep 4 16:37:40.760408 dockerd[1845]: time="2025-09-04T16:37:40.760373091Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 16:37:40.775109 dockerd[1845]: time="2025-09-04T16:37:40.775016092Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 4 16:37:41.239396 dockerd[1845]: time="2025-09-04T16:37:41.239282052Z" level=info msg="Loading containers: start." Sep 4 16:37:41.248924 kernel: Initializing XFRM netlink socket Sep 4 16:37:41.487549 systemd-networkd[1496]: docker0: Link UP Sep 4 16:37:41.492069 dockerd[1845]: time="2025-09-04T16:37:41.491989080Z" level=info msg="Loading containers: done." Sep 4 16:37:41.504803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1210496125-merged.mount: Deactivated successfully. 
Sep 4 16:37:41.507450 dockerd[1845]: time="2025-09-04T16:37:41.507393411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 16:37:41.507575 dockerd[1845]: time="2025-09-04T16:37:41.507486356Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 4 16:37:41.507601 dockerd[1845]: time="2025-09-04T16:37:41.507584805Z" level=info msg="Initializing buildkit" Sep 4 16:37:41.535877 dockerd[1845]: time="2025-09-04T16:37:41.535844578Z" level=info msg="Completed buildkit initialization" Sep 4 16:37:41.542376 dockerd[1845]: time="2025-09-04T16:37:41.542346768Z" level=info msg="Daemon has completed initialization" Sep 4 16:37:41.542468 dockerd[1845]: time="2025-09-04T16:37:41.542420151Z" level=info msg="API listen on /run/docker.sock" Sep 4 16:37:41.542597 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 16:37:42.209952 containerd[1609]: time="2025-09-04T16:37:42.209914381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 4 16:37:42.815423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645045806.mount: Deactivated successfully. 
Sep 4 16:37:43.695248 containerd[1609]: time="2025-09-04T16:37:43.695182752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:43.696067 containerd[1609]: time="2025-09-04T16:37:43.696004334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 4 16:37:43.697299 containerd[1609]: time="2025-09-04T16:37:43.697254712Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:43.699652 containerd[1609]: time="2025-09-04T16:37:43.699615603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:43.700514 containerd[1609]: time="2025-09-04T16:37:43.700485122Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.49052759s" Sep 4 16:37:43.700555 containerd[1609]: time="2025-09-04T16:37:43.700517057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 4 16:37:43.701092 containerd[1609]: time="2025-09-04T16:37:43.701038137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 4 16:37:44.878746 containerd[1609]: time="2025-09-04T16:37:44.878686334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:44.879609 containerd[1609]: time="2025-09-04T16:37:44.879544839Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 4 16:37:44.880954 containerd[1609]: time="2025-09-04T16:37:44.880905110Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:44.883624 containerd[1609]: time="2025-09-04T16:37:44.883585992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:44.884443 containerd[1609]: time="2025-09-04T16:37:44.884402618Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.183330626s" Sep 4 16:37:44.884443 containerd[1609]: time="2025-09-04T16:37:44.884433583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 4 16:37:44.884931 containerd[1609]: time="2025-09-04T16:37:44.884910123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 4 16:37:46.298006 containerd[1609]: time="2025-09-04T16:37:46.297944648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:46.298988 containerd[1609]: time="2025-09-04T16:37:46.298927712Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 4 16:37:46.300275 containerd[1609]: time="2025-09-04T16:37:46.300215349Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:46.302797 containerd[1609]: time="2025-09-04T16:37:46.302746010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:46.303663 containerd[1609]: time="2025-09-04T16:37:46.303634862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.418618637s" Sep 4 16:37:46.303703 containerd[1609]: time="2025-09-04T16:37:46.303664258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 4 16:37:46.304083 containerd[1609]: time="2025-09-04T16:37:46.304059196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 4 16:37:46.706361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 16:37:46.707923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:37:46.919526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 16:37:46.929262 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 16:37:46.971589 kubelet[2136]: E0904 16:37:46.971496 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 16:37:46.978178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 16:37:46.978377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 16:37:46.978743 systemd[1]: kubelet.service: Consumed 221ms CPU time, 115.2M memory peak. Sep 4 16:37:47.761126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669243742.mount: Deactivated successfully. Sep 4 16:37:48.440689 containerd[1609]: time="2025-09-04T16:37:48.440633917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:48.441613 containerd[1609]: time="2025-09-04T16:37:48.441580867Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 4 16:37:48.442769 containerd[1609]: time="2025-09-04T16:37:48.442738395Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:48.444741 containerd[1609]: time="2025-09-04T16:37:48.444694377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:48.445191 containerd[1609]: time="2025-09-04T16:37:48.445149248Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.141066526s" Sep 4 16:37:48.445191 containerd[1609]: time="2025-09-04T16:37:48.445188498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 4 16:37:48.445664 containerd[1609]: time="2025-09-04T16:37:48.445639537Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 16:37:48.914725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234703271.mount: Deactivated successfully. Sep 4 16:37:49.568045 containerd[1609]: time="2025-09-04T16:37:49.567988097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:49.568777 containerd[1609]: time="2025-09-04T16:37:49.568755774Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 16:37:49.570164 containerd[1609]: time="2025-09-04T16:37:49.570121593Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:49.572438 containerd[1609]: time="2025-09-04T16:37:49.572404909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:49.573350 containerd[1609]: time="2025-09-04T16:37:49.573320706Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.127652699s" Sep 4 16:37:49.573388 containerd[1609]: time="2025-09-04T16:37:49.573347861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 16:37:49.573841 containerd[1609]: time="2025-09-04T16:37:49.573801412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 16:37:50.048246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509128854.mount: Deactivated successfully. Sep 4 16:37:50.054528 containerd[1609]: time="2025-09-04T16:37:50.054483699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:37:50.055277 containerd[1609]: time="2025-09-04T16:37:50.055237336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 16:37:50.056414 containerd[1609]: time="2025-09-04T16:37:50.056380976Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:37:50.058266 containerd[1609]: time="2025-09-04T16:37:50.058228991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:37:50.058788 containerd[1609]: time="2025-09-04T16:37:50.058754098Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 484.920144ms" Sep 4 16:37:50.058788 containerd[1609]: time="2025-09-04T16:37:50.058778568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 16:37:50.059359 containerd[1609]: time="2025-09-04T16:37:50.059168621Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 4 16:37:50.652062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979458365.mount: Deactivated successfully. Sep 4 16:37:52.314131 containerd[1609]: time="2025-09-04T16:37:52.314060622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:52.314936 containerd[1609]: time="2025-09-04T16:37:52.314893417Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 4 16:37:52.316292 containerd[1609]: time="2025-09-04T16:37:52.316250232Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:52.319551 containerd[1609]: time="2025-09-04T16:37:52.319516068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:37:52.320417 containerd[1609]: time="2025-09-04T16:37:52.320384271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.261186442s" Sep 4 16:37:52.320457 containerd[1609]: time="2025-09-04T16:37:52.320416557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 4 16:37:54.452228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:37:54.452384 systemd[1]: kubelet.service: Consumed 221ms CPU time, 115.2M memory peak. Sep 4 16:37:54.454419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:37:54.478315 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-7.scope)... Sep 4 16:37:54.478331 systemd[1]: Reloading... Sep 4 16:37:54.565933 zram_generator::config[2340]: No configuration found. Sep 4 16:37:54.865690 systemd[1]: Reloading finished in 387 ms. Sep 4 16:37:54.948556 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 16:37:54.948657 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 16:37:54.948968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:37:54.949012 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.4M memory peak. Sep 4 16:37:54.950444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:37:55.115995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:37:55.120017 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 16:37:55.154242 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:37:55.154242 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 16:37:55.154242 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:37:55.154474 kubelet[2385]: I0904 16:37:55.154298 2385 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 16:37:55.423357 kubelet[2385]: I0904 16:37:55.423275 2385 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 16:37:55.423357 kubelet[2385]: I0904 16:37:55.423309 2385 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 16:37:55.423593 kubelet[2385]: I0904 16:37:55.423571 2385 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 16:37:55.445647 kubelet[2385]: E0904 16:37:55.445615 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:55.446526 kubelet[2385]: I0904 16:37:55.446502 2385 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 16:37:55.454115 kubelet[2385]: I0904 16:37:55.454091 2385 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 16:37:55.460017 kubelet[2385]: I0904 16:37:55.459980 
2385 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 16:37:55.460506 kubelet[2385]: I0904 16:37:55.460478 2385 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 16:37:55.460670 kubelet[2385]: I0904 16:37:55.460633 2385 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 16:37:55.460836 kubelet[2385]: I0904 16:37:55.460657 2385 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"C
PUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 16:37:55.460948 kubelet[2385]: I0904 16:37:55.460840 2385 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 16:37:55.460948 kubelet[2385]: I0904 16:37:55.460849 2385 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 16:37:55.460997 kubelet[2385]: I0904 16:37:55.460974 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:37:55.462749 kubelet[2385]: I0904 16:37:55.462722 2385 kubelet.go:408] "Attempting to sync node with API server" Sep 4 16:37:55.462749 kubelet[2385]: I0904 16:37:55.462743 2385 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 16:37:55.462807 kubelet[2385]: I0904 16:37:55.462775 2385 kubelet.go:314] "Adding apiserver pod source" Sep 4 16:37:55.462807 kubelet[2385]: I0904 16:37:55.462791 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 16:37:55.466822 kubelet[2385]: I0904 16:37:55.466496 2385 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 16:37:55.466861 kubelet[2385]: I0904 16:37:55.466846 2385 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 16:37:55.467905 kubelet[2385]: W0904 16:37:55.466911 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
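The `connect: connection refused` errors against `10.0.0.3:6443` in the entries around this point are the kubelet's clients retrying before the API server (itself a static pod the kubelet will start) is reachable; the lease controller entries later in this log show the retry interval doubling (`interval="200ms"`, then `"400ms"`, then `"800ms"`). A minimal sketch of that doubling-backoff pattern, assuming illustrative names and a made-up cap value (this is not kubelet code):

```python
import time

def retry_with_backoff(attempt_fn, base=0.2, cap=7.0, max_tries=5):
    """Retry attempt_fn until it succeeds, doubling the wait each failure.

    Mirrors the doubling retry intervals visible in this log
    (200ms -> 400ms -> 800ms). The function name, cap, and max_tries
    are illustrative assumptions, not values taken from the kubelet.
    Returns (result, list_of_waits_used).
    """
    delay = base
    waits = []
    for _ in range(max_tries):
        try:
            return attempt_fn(), waits
        except ConnectionError:
            waits.append(delay)
            time.sleep(delay)
            delay = min(delay * 2, cap)
    raise ConnectionError("gave up after %d tries" % max_tries)
```

Once the control-plane static pods come up, one of these attempts succeeds and the reflector warnings stop.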
Sep 4 16:37:55.468316 kubelet[2385]: W0904 16:37:55.468241 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:55.468316 kubelet[2385]: E0904 16:37:55.468312 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:55.468778 kubelet[2385]: I0904 16:37:55.468756 2385 server.go:1274] "Started kubelet" Sep 4 16:37:55.469687 kubelet[2385]: W0904 16:37:55.469616 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:55.469687 kubelet[2385]: E0904 16:37:55.469671 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:55.469773 kubelet[2385]: I0904 16:37:55.469717 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 16:37:55.469950 kubelet[2385]: I0904 16:37:55.469927 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 16:37:55.470044 kubelet[2385]: I0904 16:37:55.470021 2385 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 16:37:55.470094 
kubelet[2385]: I0904 16:37:55.470071 2385 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 16:37:55.470940 kubelet[2385]: I0904 16:37:55.470920 2385 server.go:449] "Adding debug handlers to kubelet server" Sep 4 16:37:55.471665 kubelet[2385]: I0904 16:37:55.471229 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 16:37:55.474623 kubelet[2385]: E0904 16:37:55.474595 2385 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:37:55.475714 kubelet[2385]: I0904 16:37:55.474601 2385 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 16:37:55.475799 kubelet[2385]: I0904 16:37:55.474610 2385 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 16:37:55.475849 kubelet[2385]: I0904 16:37:55.475836 2385 reconciler.go:26] "Reconciler: start to sync state" Sep 4 16:37:55.475849 kubelet[2385]: W0904 16:37:55.475137 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:55.475914 kubelet[2385]: E0904 16:37:55.475874 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:55.475914 kubelet[2385]: E0904 16:37:55.473495 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.3:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186221b73a4c5bb6 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 16:37:55.46872927 +0000 UTC m=+0.345185096,LastTimestamp:2025-09-04 16:37:55.46872927 +0000 UTC m=+0.345185096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 16:37:55.475914 kubelet[2385]: E0904 16:37:55.474812 2385 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 16:37:55.476053 kubelet[2385]: E0904 16:37:55.475540 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="200ms" Sep 4 16:37:55.476231 kubelet[2385]: I0904 16:37:55.476202 2385 factory.go:221] Registration of the systemd container factory successfully Sep 4 16:37:55.476308 kubelet[2385]: I0904 16:37:55.476289 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 16:37:55.479017 kubelet[2385]: I0904 16:37:55.478989 2385 factory.go:221] Registration of the containerd container factory successfully Sep 4 16:37:55.487662 kubelet[2385]: I0904 16:37:55.487619 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 16:37:55.488814 kubelet[2385]: I0904 16:37:55.488785 2385 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 16:37:55.488814 kubelet[2385]: I0904 16:37:55.488806 2385 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 16:37:55.488814 kubelet[2385]: I0904 16:37:55.488823 2385 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 16:37:55.488966 kubelet[2385]: E0904 16:37:55.488863 2385 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:37:55.494067 kubelet[2385]: W0904 16:37:55.494022 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:55.494114 kubelet[2385]: E0904 16:37:55.494065 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:55.494865 kubelet[2385]: I0904 16:37:55.494833 2385 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 16:37:55.494865 kubelet[2385]: I0904 16:37:55.494849 2385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 16:37:55.494865 kubelet[2385]: I0904 16:37:55.494863 2385 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:37:55.575835 kubelet[2385]: E0904 16:37:55.575798 2385 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:37:55.589269 kubelet[2385]: E0904 16:37:55.589235 2385 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 16:37:55.607437 kubelet[2385]: I0904 16:37:55.607409 2385 policy_none.go:49] "None policy: Start" Sep 
4 16:37:55.607942 kubelet[2385]: I0904 16:37:55.607921 2385 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 16:37:55.607942 kubelet[2385]: I0904 16:37:55.607943 2385 state_mem.go:35] "Initializing new in-memory state store" Sep 4 16:37:55.614007 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 16:37:55.628750 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 16:37:55.640605 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 16:37:55.641833 kubelet[2385]: I0904 16:37:55.641810 2385 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 16:37:55.642038 kubelet[2385]: I0904 16:37:55.642021 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:37:55.642089 kubelet[2385]: I0904 16:37:55.642035 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:37:55.642264 kubelet[2385]: I0904 16:37:55.642233 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:37:55.643225 kubelet[2385]: E0904 16:37:55.643197 2385 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 16:37:55.677243 kubelet[2385]: E0904 16:37:55.677158 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="400ms" Sep 4 16:37:55.743242 kubelet[2385]: I0904 16:37:55.743206 2385 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 16:37:55.743521 kubelet[2385]: E0904 16:37:55.743492 2385 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Sep 4 16:37:55.796979 systemd[1]: Created slice kubepods-burstable-pod28c181f7165e71ecca65dd83a55b168b.slice - libcontainer container kubepods-burstable-pod28c181f7165e71ecca65dd83a55b168b.slice. Sep 4 16:37:55.807638 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 4 16:37:55.810900 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 4 16:37:55.878926 kubelet[2385]: I0904 16:37:55.878794 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:37:55.878985 kubelet[2385]: I0904 16:37:55.878926 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:55.878985 kubelet[2385]: I0904 16:37:55.878947 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:55.878985 kubelet[2385]: I0904 16:37:55.878964 2385 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:55.878985 kubelet[2385]: I0904 16:37:55.878978 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:55.879073 kubelet[2385]: I0904 16:37:55.878992 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:55.879073 kubelet[2385]: I0904 16:37:55.879035 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:37:55.879073 kubelet[2385]: I0904 16:37:55.879050 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 4 16:37:55.879142 kubelet[2385]: I0904 16:37:55.879087 2385 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:37:55.944654 kubelet[2385]: I0904 16:37:55.944579 2385 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 16:37:55.944987 kubelet[2385]: E0904 16:37:55.944858 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Sep 4 16:37:56.078396 kubelet[2385]: E0904 16:37:56.078350 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="800ms" Sep 4 16:37:56.105763 kubelet[2385]: E0904 16:37:56.105733 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.106259 containerd[1609]: time="2025-09-04T16:37:56.106218666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28c181f7165e71ecca65dd83a55b168b,Namespace:kube-system,Attempt:0,}" Sep 4 16:37:56.110544 kubelet[2385]: E0904 16:37:56.110516 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.110877 containerd[1609]: time="2025-09-04T16:37:56.110843939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 4 16:37:56.113111 
kubelet[2385]: E0904 16:37:56.113082 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.113412 containerd[1609]: time="2025-09-04T16:37:56.113380100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 4 16:37:56.134053 containerd[1609]: time="2025-09-04T16:37:56.134019366Z" level=info msg="connecting to shim 87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194" address="unix:///run/containerd/s/4520c9d40e3072d723c0d7a7e2a5aa6c58ef3750e0927581f6c84d661ed5cff3" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:37:56.153724 containerd[1609]: time="2025-09-04T16:37:56.153667513Z" level=info msg="connecting to shim ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f" address="unix:///run/containerd/s/bbca18c1a96d9665357d1c9a12cf7534c155b3474729e94eb5ba569068a46a0f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:37:56.155084 containerd[1609]: time="2025-09-04T16:37:56.154743867Z" level=info msg="connecting to shim 7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f" address="unix:///run/containerd/s/3256d54e37f09ee2080dc5b28d0e23b96574205d6ae3e142565007d847e78a53" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:37:56.163063 systemd[1]: Started cri-containerd-87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194.scope - libcontainer container 87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194. Sep 4 16:37:56.181020 systemd[1]: Started cri-containerd-ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f.scope - libcontainer container ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f. 
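Each successful pull earlier in this log ends with a containerd entry of the form `Pulled image "<ref>" ... size "<bytes>" in <duration>`, which makes the effective pull throughput recoverable from the text alone. A small parser for that message shape, assuming the line format shown above (the regex targets these log messages, not any containerd API; it tolerates the escaped `\"` quoting as captured here):

```python
import re

# Matches: Pulled image "<ref>" ... size "<bytes>" in <value><ms|s>
# The optional backslashes accept both plain and \"-escaped quoting.
PULL_RE = re.compile(
    r'Pulled image \\?"([^"\\]+)\\?".*size \\?"(\d+)\\?" in ([\d.]+)(ms|s)'
)

def pull_throughput(line):
    """Return (image_ref, MB/s) for a containerd 'Pulled image' entry,
    or None if the line does not match."""
    m = PULL_RE.search(line)
    if not m:
        return None
    ref, size = m.group(1), int(m.group(2))
    value, unit = float(m.group(3)), m.group(4)
    seconds = value / 1000.0 if unit == "ms" else value
    return ref, size / seconds / 1e6
```

Applied to the entries above: the pause:3.10 pull (320368 bytes in ~485 ms) works out to roughly 0.66 MB/s, while the much larger etcd:3.5.15-0 layer (56909194 bytes in ~2.26 s) sustained about 25 MB/s.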
Sep 4 16:37:56.186496 systemd[1]: Started cri-containerd-7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f.scope - libcontainer container 7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f. Sep 4 16:37:56.223213 containerd[1609]: time="2025-09-04T16:37:56.223109028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28c181f7165e71ecca65dd83a55b168b,Namespace:kube-system,Attempt:0,} returns sandbox id \"87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194\"" Sep 4 16:37:56.227378 kubelet[2385]: E0904 16:37:56.227343 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.233277 containerd[1609]: time="2025-09-04T16:37:56.233239233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f\"" Sep 4 16:37:56.234485 kubelet[2385]: E0904 16:37:56.234463 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.234911 containerd[1609]: time="2025-09-04T16:37:56.234290346Z" level=info msg="CreateContainer within sandbox \"87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 16:37:56.237177 containerd[1609]: time="2025-09-04T16:37:56.237128745Z" level=info msg="CreateContainer within sandbox \"ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 16:37:56.244293 containerd[1609]: time="2025-09-04T16:37:56.244252236Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f\"" Sep 4 16:37:56.244766 kubelet[2385]: E0904 16:37:56.244741 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:56.246024 containerd[1609]: time="2025-09-04T16:37:56.245985206Z" level=info msg="CreateContainer within sandbox \"7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 16:37:56.247730 containerd[1609]: time="2025-09-04T16:37:56.247707552Z" level=info msg="Container 9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:37:56.269463 containerd[1609]: time="2025-09-04T16:37:56.268918328Z" level=info msg="Container 929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:37:56.346860 kubelet[2385]: I0904 16:37:56.346820 2385 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 16:37:56.347219 kubelet[2385]: E0904 16:37:56.347170 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Sep 4 16:37:56.492174 kubelet[2385]: W0904 16:37:56.491999 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:56.492174 kubelet[2385]: E0904 16:37:56.492077 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:56.498129 kubelet[2385]: W0904 16:37:56.498051 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:56.498129 kubelet[2385]: E0904 16:37:56.498122 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:56.548943 kubelet[2385]: W0904 16:37:56.548869 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.3:6443: connect: connection refused Sep 4 16:37:56.548943 kubelet[2385]: E0904 16:37:56.548937 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:37:56.549628 containerd[1609]: time="2025-09-04T16:37:56.549577388Z" level=info msg="CreateContainer within sandbox \"87776a54f20b7b186694e5cb23fe497e274b18c1ab2a0d88e04e610cd7872194\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547\"" Sep 4 16:37:56.550311 containerd[1609]: 
time="2025-09-04T16:37:56.550280805Z" level=info msg="StartContainer for \"929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547\"" Sep 4 16:37:56.551407 containerd[1609]: time="2025-09-04T16:37:56.551384317Z" level=info msg="connecting to shim 929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547" address="unix:///run/containerd/s/4520c9d40e3072d723c0d7a7e2a5aa6c58ef3750e0927581f6c84d661ed5cff3" protocol=ttrpc version=3 Sep 4 16:37:56.551928 containerd[1609]: time="2025-09-04T16:37:56.551879230Z" level=info msg="Container ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:37:56.553838 containerd[1609]: time="2025-09-04T16:37:56.553794490Z" level=info msg="CreateContainer within sandbox \"ee9bc519f6db43d93eff08570c95a2205b358742f85aacf550ec3761d574ad1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20\"" Sep 4 16:37:56.554940 containerd[1609]: time="2025-09-04T16:37:56.554229036Z" level=info msg="StartContainer for \"9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20\"" Sep 4 16:37:56.555119 containerd[1609]: time="2025-09-04T16:37:56.555096163Z" level=info msg="connecting to shim 9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20" address="unix:///run/containerd/s/bbca18c1a96d9665357d1c9a12cf7534c155b3474729e94eb5ba569068a46a0f" protocol=ttrpc version=3 Sep 4 16:37:56.559636 containerd[1609]: time="2025-09-04T16:37:56.559595369Z" level=info msg="CreateContainer within sandbox \"7c8d922b4cc1b9c0558c0afdd47901603117cffc838dd3bdcd9f577ad63a775f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e\"" Sep 4 16:37:56.560007 containerd[1609]: time="2025-09-04T16:37:56.559985823Z" level=info msg="StartContainer for 
\"ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e\"" Sep 4 16:37:56.560953 containerd[1609]: time="2025-09-04T16:37:56.560928755Z" level=info msg="connecting to shim ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e" address="unix:///run/containerd/s/3256d54e37f09ee2080dc5b28d0e23b96574205d6ae3e142565007d847e78a53" protocol=ttrpc version=3 Sep 4 16:37:56.579019 systemd[1]: Started cri-containerd-9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20.scope - libcontainer container 9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20. Sep 4 16:37:56.583533 systemd[1]: Started cri-containerd-929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547.scope - libcontainer container 929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547. Sep 4 16:37:56.585359 systemd[1]: Started cri-containerd-ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e.scope - libcontainer container ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e. 
Sep 4 16:37:56.631718 containerd[1609]: time="2025-09-04T16:37:56.631665644Z" level=info msg="StartContainer for \"9ba555d83bba91c28b22fb7618e1a68860962c845d72f142ba50846ad0c5ff20\" returns successfully" Sep 4 16:37:56.641169 containerd[1609]: time="2025-09-04T16:37:56.641138250Z" level=info msg="StartContainer for \"ccb0163b01663f7452580b9d9802db37e9931d8bd667920006471e197611f09e\" returns successfully" Sep 4 16:37:56.644080 containerd[1609]: time="2025-09-04T16:37:56.643967920Z" level=info msg="StartContainer for \"929714fa4cdcaacd691d87f0e15b7fe8dc29de6f522e2565eca4340c2ca70547\" returns successfully" Sep 4 16:37:57.148631 kubelet[2385]: I0904 16:37:57.148537 2385 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 16:37:57.504658 kubelet[2385]: E0904 16:37:57.504548 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:57.506268 kubelet[2385]: E0904 16:37:57.506244 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:57.508075 kubelet[2385]: E0904 16:37:57.508052 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:57.809484 kubelet[2385]: E0904 16:37:57.809165 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 16:37:57.904456 kubelet[2385]: I0904 16:37:57.904382 2385 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 16:37:58.466017 kubelet[2385]: I0904 16:37:58.465958 2385 apiserver.go:52] "Watching apiserver" Sep 4 16:37:58.476813 kubelet[2385]: I0904 16:37:58.476789 2385 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 16:37:58.512725 kubelet[2385]: E0904 16:37:58.512681 2385 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 16:37:58.512725 kubelet[2385]: E0904 16:37:58.512712 2385 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 16:37:58.513145 kubelet[2385]: E0904 16:37:58.512684 2385 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:37:58.513145 kubelet[2385]: E0904 16:37:58.512822 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:58.513145 kubelet[2385]: E0904 16:37:58.512928 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:58.513145 kubelet[2385]: E0904 16:37:58.512967 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:59.513726 kubelet[2385]: E0904 16:37:59.513687 2385 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:37:59.555441 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-7.scope)... 
Sep 4 16:37:59.555456 systemd[1]: Reloading... Sep 4 16:37:59.631961 zram_generator::config[2706]: No configuration found. Sep 4 16:37:59.855629 systemd[1]: Reloading finished in 299 ms. Sep 4 16:37:59.885058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:37:59.910102 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 16:37:59.910397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:37:59.910448 systemd[1]: kubelet.service: Consumed 768ms CPU time, 130.9M memory peak. Sep 4 16:37:59.912113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:38:00.129070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:38:00.135252 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 16:38:00.171744 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:38:00.171744 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 16:38:00.171744 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 16:38:00.172190 kubelet[2751]: I0904 16:38:00.171812 2751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 16:38:00.178569 kubelet[2751]: I0904 16:38:00.178530 2751 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 16:38:00.178569 kubelet[2751]: I0904 16:38:00.178550 2751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 16:38:00.178776 kubelet[2751]: I0904 16:38:00.178753 2751 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 16:38:00.179900 kubelet[2751]: I0904 16:38:00.179871 2751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 16:38:00.182365 kubelet[2751]: I0904 16:38:00.182281 2751 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 16:38:00.186049 kubelet[2751]: I0904 16:38:00.186026 2751 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 16:38:00.190178 kubelet[2751]: I0904 16:38:00.190157 2751 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 16:38:00.190282 kubelet[2751]: I0904 16:38:00.190261 2751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 16:38:00.190429 kubelet[2751]: I0904 16:38:00.190398 2751 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 16:38:00.190579 kubelet[2751]: I0904 16:38:00.190422 2751 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 4 16:38:00.190653 kubelet[2751]: I0904 16:38:00.190590 2751 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 16:38:00.190653 kubelet[2751]: I0904 16:38:00.190600 2751 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 16:38:00.190653 kubelet[2751]: I0904 16:38:00.190624 2751 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:38:00.190755 kubelet[2751]: I0904 16:38:00.190717 2751 kubelet.go:408] "Attempting to sync node with API server" Sep 4 16:38:00.190755 kubelet[2751]: I0904 16:38:00.190728 2751 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 16:38:00.190799 kubelet[2751]: I0904 16:38:00.190759 2751 kubelet.go:314] "Adding apiserver pod source" Sep 4 16:38:00.190799 kubelet[2751]: I0904 16:38:00.190769 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.193418 2751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.193807 2751 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.194268 2751 server.go:1274] "Started kubelet" Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.194436 2751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.194612 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.194856 2751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 16:38:00.196366 kubelet[2751]: I0904 16:38:00.195460 2751 server.go:449] "Adding debug handlers to kubelet server" Sep 4 16:38:00.198353 kubelet[2751]: 
I0904 16:38:00.198302 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 16:38:00.200600 kubelet[2751]: I0904 16:38:00.200568 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 16:38:00.202267 kubelet[2751]: I0904 16:38:00.201169 2751 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 16:38:00.202267 kubelet[2751]: I0904 16:38:00.201451 2751 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 16:38:00.202267 kubelet[2751]: I0904 16:38:00.201587 2751 reconciler.go:26] "Reconciler: start to sync state" Sep 4 16:38:00.203708 kubelet[2751]: I0904 16:38:00.203672 2751 factory.go:221] Registration of the systemd container factory successfully Sep 4 16:38:00.203897 kubelet[2751]: I0904 16:38:00.203756 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 16:38:00.203897 kubelet[2751]: E0904 16:38:00.203808 2751 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 16:38:00.204923 kubelet[2751]: I0904 16:38:00.204795 2751 factory.go:221] Registration of the containerd container factory successfully Sep 4 16:38:00.212129 kubelet[2751]: I0904 16:38:00.212097 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 16:38:00.213533 kubelet[2751]: I0904 16:38:00.213515 2751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 16:38:00.213607 kubelet[2751]: I0904 16:38:00.213598 2751 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 16:38:00.213716 kubelet[2751]: I0904 16:38:00.213706 2751 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 16:38:00.213801 kubelet[2751]: E0904 16:38:00.213784 2751 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:38:00.243698 kubelet[2751]: I0904 16:38:00.243678 2751 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 16:38:00.243785 kubelet[2751]: I0904 16:38:00.243774 2751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 16:38:00.243846 kubelet[2751]: I0904 16:38:00.243837 2751 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:38:00.244049 kubelet[2751]: I0904 16:38:00.244022 2751 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 16:38:00.244049 kubelet[2751]: I0904 16:38:00.244037 2751 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 16:38:00.244049 kubelet[2751]: I0904 16:38:00.244056 2751 policy_none.go:49] "None policy: Start" Sep 4 16:38:00.244769 kubelet[2751]: I0904 16:38:00.244747 2751 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 16:38:00.244769 kubelet[2751]: I0904 16:38:00.244769 2751 state_mem.go:35] "Initializing new in-memory state store" Sep 4 16:38:00.244915 kubelet[2751]: I0904 16:38:00.244900 2751 state_mem.go:75] "Updated machine memory state" Sep 4 16:38:00.249098 kubelet[2751]: I0904 16:38:00.249076 2751 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 16:38:00.249317 kubelet[2751]: I0904 16:38:00.249304 2751 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:38:00.249400 kubelet[2751]: I0904 16:38:00.249369 2751 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:38:00.249585 kubelet[2751]: I0904 16:38:00.249561 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:38:00.320664 kubelet[2751]: E0904 16:38:00.320628 2751 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 16:38:00.357675 kubelet[2751]: I0904 16:38:00.357645 2751 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 4 16:38:00.363981 kubelet[2751]: I0904 16:38:00.363959 2751 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 4 16:38:00.364054 kubelet[2751]: I0904 16:38:00.364023 2751 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 4 16:38:00.503023 kubelet[2751]: I0904 16:38:00.502920 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:38:00.503023 kubelet[2751]: I0904 16:38:00.502948 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:38:00.503023 kubelet[2751]: I0904 16:38:00.502970 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 
16:38:00.503023 kubelet[2751]: I0904 16:38:00.502988 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:38:00.503023 kubelet[2751]: I0904 16:38:00.503003 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28c181f7165e71ecca65dd83a55b168b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28c181f7165e71ecca65dd83a55b168b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:38:00.503240 kubelet[2751]: I0904 16:38:00.503029 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:38:00.503240 kubelet[2751]: I0904 16:38:00.503067 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:38:00.503240 kubelet[2751]: I0904 16:38:00.503100 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " 
pod="kube-system/kube-scheduler-localhost" Sep 4 16:38:00.503240 kubelet[2751]: I0904 16:38:00.503117 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:38:00.619923 kubelet[2751]: E0904 16:38:00.619811 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:00.620839 kubelet[2751]: E0904 16:38:00.620818 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:00.620961 kubelet[2751]: E0904 16:38:00.620951 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:01.191709 kubelet[2751]: I0904 16:38:01.191670 2751 apiserver.go:52] "Watching apiserver" Sep 4 16:38:01.201622 kubelet[2751]: I0904 16:38:01.201598 2751 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 16:38:01.231574 kubelet[2751]: E0904 16:38:01.231535 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:01.232196 kubelet[2751]: E0904 16:38:01.232171 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:01.232405 kubelet[2751]: E0904 16:38:01.232382 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:01.248726 kubelet[2751]: I0904 16:38:01.248673 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.248657219 podStartE2EDuration="1.248657219s" podCreationTimestamp="2025-09-04 16:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:01.248487111 +0000 UTC m=+1.109428231" watchObservedRunningTime="2025-09-04 16:38:01.248657219 +0000 UTC m=+1.109598339" Sep 4 16:38:01.260093 kubelet[2751]: I0904 16:38:01.260052 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.260038717 podStartE2EDuration="1.260038717s" podCreationTimestamp="2025-09-04 16:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:01.255088013 +0000 UTC m=+1.116029133" watchObservedRunningTime="2025-09-04 16:38:01.260038717 +0000 UTC m=+1.120979837" Sep 4 16:38:02.233029 kubelet[2751]: E0904 16:38:02.232983 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:02.233406 kubelet[2751]: E0904 16:38:02.233123 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:04.852408 kubelet[2751]: E0904 16:38:04.852368 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:05.307287 kubelet[2751]: E0904 
16:38:05.307240 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:06.677196 kubelet[2751]: I0904 16:38:06.677165 2751 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 16:38:06.677602 containerd[1609]: time="2025-09-04T16:38:06.677566998Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 16:38:06.677831 kubelet[2751]: I0904 16:38:06.677784 2751 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 16:38:07.499227 kubelet[2751]: I0904 16:38:07.499171 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.499135355 podStartE2EDuration="8.499135355s" podCreationTimestamp="2025-09-04 16:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:01.260809291 +0000 UTC m=+1.121750411" watchObservedRunningTime="2025-09-04 16:38:07.499135355 +0000 UTC m=+7.360076465" Sep 4 16:38:07.507340 systemd[1]: Created slice kubepods-besteffort-pod21739006_96e4_4b01_8e65_adb3b9e61438.slice - libcontainer container kubepods-besteffort-pod21739006_96e4_4b01_8e65_adb3b9e61438.slice. 
Sep 4 16:38:07.551179 kubelet[2751]: I0904 16:38:07.551147 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21739006-96e4-4b01-8e65-adb3b9e61438-xtables-lock\") pod \"kube-proxy-584gl\" (UID: \"21739006-96e4-4b01-8e65-adb3b9e61438\") " pod="kube-system/kube-proxy-584gl" Sep 4 16:38:07.551179 kubelet[2751]: I0904 16:38:07.551182 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21739006-96e4-4b01-8e65-adb3b9e61438-lib-modules\") pod \"kube-proxy-584gl\" (UID: \"21739006-96e4-4b01-8e65-adb3b9e61438\") " pod="kube-system/kube-proxy-584gl" Sep 4 16:38:07.551306 kubelet[2751]: I0904 16:38:07.551203 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnw9p\" (UniqueName: \"kubernetes.io/projected/21739006-96e4-4b01-8e65-adb3b9e61438-kube-api-access-pnw9p\") pod \"kube-proxy-584gl\" (UID: \"21739006-96e4-4b01-8e65-adb3b9e61438\") " pod="kube-system/kube-proxy-584gl" Sep 4 16:38:07.551306 kubelet[2751]: I0904 16:38:07.551220 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21739006-96e4-4b01-8e65-adb3b9e61438-kube-proxy\") pod \"kube-proxy-584gl\" (UID: \"21739006-96e4-4b01-8e65-adb3b9e61438\") " pod="kube-system/kube-proxy-584gl" Sep 4 16:38:07.812503 systemd[1]: Created slice kubepods-besteffort-podb5aa3a26_8799_4a63_b2cb_8be65a3b831a.slice - libcontainer container kubepods-besteffort-podb5aa3a26_8799_4a63_b2cb_8be65a3b831a.slice. 
Sep 4 16:38:07.820667 kubelet[2751]: E0904 16:38:07.820632 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:07.821161 containerd[1609]: time="2025-09-04T16:38:07.821106187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-584gl,Uid:21739006-96e4-4b01-8e65-adb3b9e61438,Namespace:kube-system,Attempt:0,}" Sep 4 16:38:07.838456 containerd[1609]: time="2025-09-04T16:38:07.838388265Z" level=info msg="connecting to shim 295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b" address="unix:///run/containerd/s/36c7cc2ebc657341adf7b06195ae2e11b47540e5ebfc11204df34866622b4732" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:07.853124 kubelet[2751]: I0904 16:38:07.853089 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5aa3a26-8799-4a63-b2cb-8be65a3b831a-var-lib-calico\") pod \"tigera-operator-58fc44c59b-5jq66\" (UID: \"b5aa3a26-8799-4a63-b2cb-8be65a3b831a\") " pod="tigera-operator/tigera-operator-58fc44c59b-5jq66" Sep 4 16:38:07.853199 kubelet[2751]: I0904 16:38:07.853127 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsdqz\" (UniqueName: \"kubernetes.io/projected/b5aa3a26-8799-4a63-b2cb-8be65a3b831a-kube-api-access-tsdqz\") pod \"tigera-operator-58fc44c59b-5jq66\" (UID: \"b5aa3a26-8799-4a63-b2cb-8be65a3b831a\") " pod="tigera-operator/tigera-operator-58fc44c59b-5jq66" Sep 4 16:38:07.868017 systemd[1]: Started cri-containerd-295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b.scope - libcontainer container 295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b. 
Sep 4 16:38:07.891333 containerd[1609]: time="2025-09-04T16:38:07.891300225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-584gl,Uid:21739006-96e4-4b01-8e65-adb3b9e61438,Namespace:kube-system,Attempt:0,} returns sandbox id \"295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b\"" Sep 4 16:38:07.891940 kubelet[2751]: E0904 16:38:07.891918 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:07.893922 containerd[1609]: time="2025-09-04T16:38:07.893878476Z" level=info msg="CreateContainer within sandbox \"295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 16:38:07.907723 containerd[1609]: time="2025-09-04T16:38:07.907692707Z" level=info msg="Container bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:07.910089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328492872.mount: Deactivated successfully. 
Sep 4 16:38:07.915413 containerd[1609]: time="2025-09-04T16:38:07.915366895Z" level=info msg="CreateContainer within sandbox \"295be93e7ecf2117a8602e7e814acf7e91cc287e3f0e98d8f21c0534af54779b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f\"" Sep 4 16:38:07.915800 containerd[1609]: time="2025-09-04T16:38:07.915777746Z" level=info msg="StartContainer for \"bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f\"" Sep 4 16:38:07.917041 containerd[1609]: time="2025-09-04T16:38:07.917008205Z" level=info msg="connecting to shim bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f" address="unix:///run/containerd/s/36c7cc2ebc657341adf7b06195ae2e11b47540e5ebfc11204df34866622b4732" protocol=ttrpc version=3 Sep 4 16:38:07.941014 systemd[1]: Started cri-containerd-bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f.scope - libcontainer container bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f. 
Sep 4 16:38:07.980912 containerd[1609]: time="2025-09-04T16:38:07.980772173Z" level=info msg="StartContainer for \"bcfc41ee48e6f8bb8a0535ae7761513cc86d946610e7af36367faa7588bc9d9f\" returns successfully" Sep 4 16:38:08.115102 containerd[1609]: time="2025-09-04T16:38:08.115061974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5jq66,Uid:b5aa3a26-8799-4a63-b2cb-8be65a3b831a,Namespace:tigera-operator,Attempt:0,}" Sep 4 16:38:08.136822 containerd[1609]: time="2025-09-04T16:38:08.136768095Z" level=info msg="connecting to shim 857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71" address="unix:///run/containerd/s/371a42b406191b0a8a41e85cd4df952be833257cbd365e908290c443b4da7472" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:08.160018 systemd[1]: Started cri-containerd-857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71.scope - libcontainer container 857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71. Sep 4 16:38:08.198188 containerd[1609]: time="2025-09-04T16:38:08.198153862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5jq66,Uid:b5aa3a26-8799-4a63-b2cb-8be65a3b831a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71\"" Sep 4 16:38:08.199580 containerd[1609]: time="2025-09-04T16:38:08.199549654Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 16:38:08.242926 kubelet[2751]: E0904 16:38:08.242772 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:08.251343 kubelet[2751]: I0904 16:38:08.251192 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-584gl" podStartSLOduration=1.25117586 podStartE2EDuration="1.25117586s" podCreationTimestamp="2025-09-04 16:38:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:08.251014138 +0000 UTC m=+8.111955258" watchObservedRunningTime="2025-09-04 16:38:08.25117586 +0000 UTC m=+8.112116980" Sep 4 16:38:09.599394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016883135.mount: Deactivated successfully. Sep 4 16:38:09.964377 containerd[1609]: time="2025-09-04T16:38:09.964313573Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:09.965090 containerd[1609]: time="2025-09-04T16:38:09.965060976Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 16:38:09.966215 containerd[1609]: time="2025-09-04T16:38:09.966159578Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:09.968273 containerd[1609]: time="2025-09-04T16:38:09.968230314Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:09.968762 containerd[1609]: time="2025-09-04T16:38:09.968729453Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.769146977s" Sep 4 16:38:09.968762 containerd[1609]: time="2025-09-04T16:38:09.968759611Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 
16:38:09.971461 containerd[1609]: time="2025-09-04T16:38:09.971435371Z" level=info msg="CreateContainer within sandbox \"857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 16:38:09.979576 containerd[1609]: time="2025-09-04T16:38:09.979534692Z" level=info msg="Container aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:09.983445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402936674.mount: Deactivated successfully. Sep 4 16:38:09.987465 containerd[1609]: time="2025-09-04T16:38:09.987437396Z" level=info msg="CreateContainer within sandbox \"857e9276347a82b0a3e5690c34cadc9f4419f4254665dde4d0d328c9d9a08a71\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a\"" Sep 4 16:38:09.987973 containerd[1609]: time="2025-09-04T16:38:09.987944975Z" level=info msg="StartContainer for \"aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a\"" Sep 4 16:38:09.988655 containerd[1609]: time="2025-09-04T16:38:09.988632744Z" level=info msg="connecting to shim aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a" address="unix:///run/containerd/s/371a42b406191b0a8a41e85cd4df952be833257cbd365e908290c443b4da7472" protocol=ttrpc version=3 Sep 4 16:38:10.037005 systemd[1]: Started cri-containerd-aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a.scope - libcontainer container aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a. 
Sep 4 16:38:10.063861 containerd[1609]: time="2025-09-04T16:38:10.063823340Z" level=info msg="StartContainer for \"aff236758663c6e56bb1c374c7bb724c17611ba580261cd22568d28d25606a3a\" returns successfully" Sep 4 16:38:10.254075 kubelet[2751]: I0904 16:38:10.253961 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-5jq66" podStartSLOduration=1.483687566 podStartE2EDuration="3.253942625s" podCreationTimestamp="2025-09-04 16:38:07 +0000 UTC" firstStartedPulling="2025-09-04 16:38:08.199153425 +0000 UTC m=+8.060094535" lastFinishedPulling="2025-09-04 16:38:09.969408474 +0000 UTC m=+9.830349594" observedRunningTime="2025-09-04 16:38:10.252602761 +0000 UTC m=+10.113543881" watchObservedRunningTime="2025-09-04 16:38:10.253942625 +0000 UTC m=+10.114883765" Sep 4 16:38:11.913251 kubelet[2751]: E0904 16:38:11.913207 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:14.856458 kubelet[2751]: E0904 16:38:14.856412 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:15.117442 sudo[1824]: pam_unix(sudo:session): session closed for user root Sep 4 16:38:15.119520 sshd[1823]: Connection closed by 10.0.0.1 port 54614 Sep 4 16:38:15.120808 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Sep 4 16:38:15.125630 systemd[1]: sshd@6-10.0.0.3:22-10.0.0.1:54614.service: Deactivated successfully. Sep 4 16:38:15.127840 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 16:38:15.129486 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Sep 4 16:38:15.132521 systemd[1]: session-7.scope: Consumed 4.104s CPU time, 222.9M memory peak. Sep 4 16:38:15.139075 systemd-logind[1591]: Removed session 7. 
Sep 4 16:38:15.261410 kubelet[2751]: E0904 16:38:15.261371 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:15.312808 kubelet[2751]: E0904 16:38:15.312180 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:17.393121 systemd[1]: Created slice kubepods-besteffort-pod43a445d9_8d6f_4a67_a6b7_a792a4580a8f.slice - libcontainer container kubepods-besteffort-pod43a445d9_8d6f_4a67_a6b7_a792a4580a8f.slice. Sep 4 16:38:17.414779 kubelet[2751]: I0904 16:38:17.414718 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a445d9-8d6f-4a67-a6b7-a792a4580a8f-tigera-ca-bundle\") pod \"calico-typha-6665c8cdf9-v6hl8\" (UID: \"43a445d9-8d6f-4a67-a6b7-a792a4580a8f\") " pod="calico-system/calico-typha-6665c8cdf9-v6hl8" Sep 4 16:38:17.414779 kubelet[2751]: I0904 16:38:17.414762 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkgn\" (UniqueName: \"kubernetes.io/projected/43a445d9-8d6f-4a67-a6b7-a792a4580a8f-kube-api-access-sdkgn\") pod \"calico-typha-6665c8cdf9-v6hl8\" (UID: \"43a445d9-8d6f-4a67-a6b7-a792a4580a8f\") " pod="calico-system/calico-typha-6665c8cdf9-v6hl8" Sep 4 16:38:17.414779 kubelet[2751]: I0904 16:38:17.414779 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/43a445d9-8d6f-4a67-a6b7-a792a4580a8f-typha-certs\") pod \"calico-typha-6665c8cdf9-v6hl8\" (UID: \"43a445d9-8d6f-4a67-a6b7-a792a4580a8f\") " pod="calico-system/calico-typha-6665c8cdf9-v6hl8" Sep 4 16:38:17.697290 kubelet[2751]: E0904 16:38:17.697176 2751 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:17.697646 containerd[1609]: time="2025-09-04T16:38:17.697589878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6665c8cdf9-v6hl8,Uid:43a445d9-8d6f-4a67-a6b7-a792a4580a8f,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:17.831500 systemd[1]: Created slice kubepods-besteffort-pod86e4d66e_4e37_452e_976c_40a7c77c3ac5.slice - libcontainer container kubepods-besteffort-pod86e4d66e_4e37_452e_976c_40a7c77c3ac5.slice. Sep 4 16:38:17.839750 containerd[1609]: time="2025-09-04T16:38:17.839308231Z" level=info msg="connecting to shim 6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc" address="unix:///run/containerd/s/12ef5898168602668d0bb5d81cf38e3d0768ebd1de8fe4fb96e3451b84e40b69" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:17.866016 systemd[1]: Started cri-containerd-6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc.scope - libcontainer container 6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc. 
Sep 4 16:38:17.906873 containerd[1609]: time="2025-09-04T16:38:17.906795199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6665c8cdf9-v6hl8,Uid:43a445d9-8d6f-4a67-a6b7-a792a4580a8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc\"" Sep 4 16:38:17.907568 kubelet[2751]: E0904 16:38:17.907538 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:17.908454 containerd[1609]: time="2025-09-04T16:38:17.908430103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 16:38:17.917528 kubelet[2751]: I0904 16:38:17.917496 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-lib-modules\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917528 kubelet[2751]: I0904 16:38:17.917527 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-var-run-calico\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917614 kubelet[2751]: I0904 16:38:17.917544 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-xtables-lock\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917614 kubelet[2751]: I0904 16:38:17.917561 2751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-cni-net-dir\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917614 kubelet[2751]: I0904 16:38:17.917575 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86e4d66e-4e37-452e-976c-40a7c77c3ac5-node-certs\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917614 kubelet[2751]: I0904 16:38:17.917589 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-var-lib-calico\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917712 kubelet[2751]: I0904 16:38:17.917687 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-cni-log-dir\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917760 kubelet[2751]: I0904 16:38:17.917737 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86e4d66e-4e37-452e-976c-40a7c77c3ac5-tigera-ca-bundle\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917837 kubelet[2751]: I0904 16:38:17.917821 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-flexvol-driver-host\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917866 kubelet[2751]: I0904 16:38:17.917849 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-policysync\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917951 kubelet[2751]: I0904 16:38:17.917921 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86e4d66e-4e37-452e-976c-40a7c77c3ac5-cni-bin-dir\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:17.917976 kubelet[2751]: I0904 16:38:17.917955 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wgmh\" (UniqueName: \"kubernetes.io/projected/86e4d66e-4e37-452e-976c-40a7c77c3ac5-kube-api-access-8wgmh\") pod \"calico-node-v9f7d\" (UID: \"86e4d66e-4e37-452e-976c-40a7c77c3ac5\") " pod="calico-system/calico-node-v9f7d" Sep 4 16:38:18.020253 kubelet[2751]: E0904 16:38:18.019973 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.020253 kubelet[2751]: W0904 16:38:18.019995 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.020253 kubelet[2751]: E0904 16:38:18.020012 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.022985 kubelet[2751]: E0904 16:38:18.022960 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.022985 kubelet[2751]: W0904 16:38:18.022974 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.022985 kubelet[2751]: E0904 16:38:18.022985 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.026505 kubelet[2751]: E0904 16:38:18.026311 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.026505 kubelet[2751]: W0904 16:38:18.026327 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.026505 kubelet[2751]: E0904 16:38:18.026337 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.073919 kubelet[2751]: E0904 16:38:18.073296 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897" Sep 4 16:38:18.111117 kubelet[2751]: E0904 16:38:18.111072 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.111117 kubelet[2751]: W0904 16:38:18.111095 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.111117 kubelet[2751]: E0904 16:38:18.111114 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.111375 kubelet[2751]: E0904 16:38:18.111355 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.111375 kubelet[2751]: W0904 16:38:18.111366 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.111375 kubelet[2751]: E0904 16:38:18.111375 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.119080 kubelet[2751]: E0904 16:38:18.119052 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.119080 kubelet[2751]: W0904 16:38:18.119066 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.119080 kubelet[2751]: E0904 16:38:18.119076 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.119170 kubelet[2751]: I0904 16:38:18.119101 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f37925be-17ad-4b82-a217-d98beb0e0897-registration-dir\") pod \"csi-node-driver-x4vr9\" (UID: \"f37925be-17ad-4b82-a217-d98beb0e0897\") " pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:18.119313 kubelet[2751]: E0904 16:38:18.119287 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.119313 kubelet[2751]: W0904 16:38:18.119303 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.119361 kubelet[2751]: E0904 16:38:18.119318 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.119361 kubelet[2751]: I0904 16:38:18.119354 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f37925be-17ad-4b82-a217-d98beb0e0897-kubelet-dir\") pod \"csi-node-driver-x4vr9\" (UID: \"f37925be-17ad-4b82-a217-d98beb0e0897\") " pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:18.119575 kubelet[2751]: E0904 16:38:18.119560 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.119575 kubelet[2751]: W0904 16:38:18.119571 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.119626 kubelet[2751]: E0904 16:38:18.119587 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.119626 kubelet[2751]: I0904 16:38:18.119602 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f37925be-17ad-4b82-a217-d98beb0e0897-socket-dir\") pod \"csi-node-driver-x4vr9\" (UID: \"f37925be-17ad-4b82-a217-d98beb0e0897\") " pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:18.119805 kubelet[2751]: E0904 16:38:18.119781 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.119805 kubelet[2751]: W0904 16:38:18.119794 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.119851 kubelet[2751]: E0904 16:38:18.119809 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.119991 kubelet[2751]: E0904 16:38:18.119976 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.119991 kubelet[2751]: W0904 16:38:18.119986 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.120043 kubelet[2751]: E0904 16:38:18.119999 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.120182 kubelet[2751]: E0904 16:38:18.120167 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.120182 kubelet[2751]: W0904 16:38:18.120177 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.120225 kubelet[2751]: E0904 16:38:18.120190 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.120368 kubelet[2751]: E0904 16:38:18.120349 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.120368 kubelet[2751]: W0904 16:38:18.120359 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.120368 kubelet[2751]: E0904 16:38:18.120372 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.120539 kubelet[2751]: E0904 16:38:18.120526 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.120539 kubelet[2751]: W0904 16:38:18.120537 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.120582 kubelet[2751]: E0904 16:38:18.120549 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.120582 kubelet[2751]: I0904 16:38:18.120564 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f37925be-17ad-4b82-a217-d98beb0e0897-varrun\") pod \"csi-node-driver-x4vr9\" (UID: \"f37925be-17ad-4b82-a217-d98beb0e0897\") " pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:18.120761 kubelet[2751]: E0904 16:38:18.120743 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.120761 kubelet[2751]: W0904 16:38:18.120758 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.120809 kubelet[2751]: E0904 16:38:18.120770 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.120809 kubelet[2751]: I0904 16:38:18.120788 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfbbk\" (UniqueName: \"kubernetes.io/projected/f37925be-17ad-4b82-a217-d98beb0e0897-kube-api-access-gfbbk\") pod \"csi-node-driver-x4vr9\" (UID: \"f37925be-17ad-4b82-a217-d98beb0e0897\") " pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:18.121018 kubelet[2751]: E0904 16:38:18.121002 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.121018 kubelet[2751]: W0904 16:38:18.121013 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.121073 kubelet[2751]: E0904 16:38:18.121047 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.121197 kubelet[2751]: E0904 16:38:18.121184 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.121197 kubelet[2751]: W0904 16:38:18.121193 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.121252 kubelet[2751]: E0904 16:38:18.121221 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.121378 kubelet[2751]: E0904 16:38:18.121363 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.121378 kubelet[2751]: W0904 16:38:18.121374 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.121437 kubelet[2751]: E0904 16:38:18.121386 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.121568 kubelet[2751]: E0904 16:38:18.121552 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.121568 kubelet[2751]: W0904 16:38:18.121566 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.121609 kubelet[2751]: E0904 16:38:18.121576 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.121771 kubelet[2751]: E0904 16:38:18.121757 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.121771 kubelet[2751]: W0904 16:38:18.121766 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.121810 kubelet[2751]: E0904 16:38:18.121774 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.122000 kubelet[2751]: E0904 16:38:18.121985 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.122000 kubelet[2751]: W0904 16:38:18.121995 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.122071 kubelet[2751]: E0904 16:38:18.122005 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.135076 containerd[1609]: time="2025-09-04T16:38:18.135037552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9f7d,Uid:86e4d66e-4e37-452e-976c-40a7c77c3ac5,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:18.163600 containerd[1609]: time="2025-09-04T16:38:18.163551436Z" level=info msg="connecting to shim 30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6" address="unix:///run/containerd/s/4dcf845fb3159060d7e20e144a3d87b95d4d6c2d3eec8c05d8a557b582dd408b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:18.210021 systemd[1]: Started cri-containerd-30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6.scope - libcontainer container 30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6. Sep 4 16:38:18.222556 kubelet[2751]: E0904 16:38:18.222424 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.222556 kubelet[2751]: W0904 16:38:18.222442 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.222556 kubelet[2751]: E0904 16:38:18.222460 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.222731 kubelet[2751]: E0904 16:38:18.222708 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.222757 kubelet[2751]: W0904 16:38:18.222728 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.222784 kubelet[2751]: E0904 16:38:18.222753 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.223051 kubelet[2751]: E0904 16:38:18.223035 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.223051 kubelet[2751]: W0904 16:38:18.223046 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.223100 kubelet[2751]: E0904 16:38:18.223075 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.223340 kubelet[2751]: E0904 16:38:18.223324 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.223340 kubelet[2751]: W0904 16:38:18.223334 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.223402 kubelet[2751]: E0904 16:38:18.223347 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.223573 kubelet[2751]: E0904 16:38:18.223558 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.223573 kubelet[2751]: W0904 16:38:18.223569 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.223614 kubelet[2751]: E0904 16:38:18.223583 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.223837 kubelet[2751]: E0904 16:38:18.223822 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.223837 kubelet[2751]: W0904 16:38:18.223835 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.223904 kubelet[2751]: E0904 16:38:18.223879 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.224077 kubelet[2751]: E0904 16:38:18.224052 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.224077 kubelet[2751]: W0904 16:38:18.224072 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.224129 kubelet[2751]: E0904 16:38:18.224098 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.224262 kubelet[2751]: E0904 16:38:18.224242 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.224262 kubelet[2751]: W0904 16:38:18.224259 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.224320 kubelet[2751]: E0904 16:38:18.224304 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.224487 kubelet[2751]: E0904 16:38:18.224473 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.224487 kubelet[2751]: W0904 16:38:18.224484 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.224535 kubelet[2751]: E0904 16:38:18.224513 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.224704 kubelet[2751]: E0904 16:38:18.224690 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.224704 kubelet[2751]: W0904 16:38:18.224699 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.224754 kubelet[2751]: E0904 16:38:18.224725 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.224866 kubelet[2751]: E0904 16:38:18.224852 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.224866 kubelet[2751]: W0904 16:38:18.224862 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.225039 kubelet[2751]: E0904 16:38:18.224981 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.225076 kubelet[2751]: E0904 16:38:18.225065 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.225076 kubelet[2751]: W0904 16:38:18.225074 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.225116 kubelet[2751]: E0904 16:38:18.225095 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.225319 kubelet[2751]: E0904 16:38:18.225305 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.225319 kubelet[2751]: W0904 16:38:18.225316 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.225368 kubelet[2751]: E0904 16:38:18.225327 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.225552 kubelet[2751]: E0904 16:38:18.225538 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.225552 kubelet[2751]: W0904 16:38:18.225548 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.225597 kubelet[2751]: E0904 16:38:18.225574 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.225725 kubelet[2751]: E0904 16:38:18.225712 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.225725 kubelet[2751]: W0904 16:38:18.225722 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.225769 kubelet[2751]: E0904 16:38:18.225745 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.226010 kubelet[2751]: E0904 16:38:18.225994 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.226010 kubelet[2751]: W0904 16:38:18.226005 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.226071 kubelet[2751]: E0904 16:38:18.226055 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.226190 kubelet[2751]: E0904 16:38:18.226176 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.226190 kubelet[2751]: W0904 16:38:18.226185 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.226266 kubelet[2751]: E0904 16:38:18.226243 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.226401 kubelet[2751]: E0904 16:38:18.226387 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.226401 kubelet[2751]: W0904 16:38:18.226397 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.226511 kubelet[2751]: E0904 16:38:18.226489 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.226657 kubelet[2751]: E0904 16:38:18.226637 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.226657 kubelet[2751]: W0904 16:38:18.226651 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.226706 kubelet[2751]: E0904 16:38:18.226673 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.226906 kubelet[2751]: E0904 16:38:18.226876 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.226939 kubelet[2751]: W0904 16:38:18.226910 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.226939 kubelet[2751]: E0904 16:38:18.226926 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.227095 kubelet[2751]: E0904 16:38:18.227076 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.227095 kubelet[2751]: W0904 16:38:18.227088 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.227153 kubelet[2751]: E0904 16:38:18.227102 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.227383 kubelet[2751]: E0904 16:38:18.227357 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.227383 kubelet[2751]: W0904 16:38:18.227377 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.227442 kubelet[2751]: E0904 16:38:18.227399 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.227687 kubelet[2751]: E0904 16:38:18.227667 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.227687 kubelet[2751]: W0904 16:38:18.227680 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.227740 kubelet[2751]: E0904 16:38:18.227694 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.227920 kubelet[2751]: E0904 16:38:18.227894 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.227920 kubelet[2751]: W0904 16:38:18.227909 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.227981 kubelet[2751]: E0904 16:38:18.227924 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.228241 kubelet[2751]: E0904 16:38:18.228219 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.228241 kubelet[2751]: W0904 16:38:18.228232 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.228241 kubelet[2751]: E0904 16:38:18.228241 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:18.235413 kubelet[2751]: E0904 16:38:18.235359 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:18.235413 kubelet[2751]: W0904 16:38:18.235374 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:18.235413 kubelet[2751]: E0904 16:38:18.235388 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:18.241759 containerd[1609]: time="2025-09-04T16:38:18.241733791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9f7d,Uid:86e4d66e-4e37-452e-976c-40a7c77c3ac5,Namespace:calico-system,Attempt:0,} returns sandbox id \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\"" Sep 4 16:38:19.563422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286469599.mount: Deactivated successfully. Sep 4 16:38:19.791900 update_engine[1593]: I20250904 16:38:19.791841 1593 update_attempter.cc:509] Updating boot flags... 
Sep 4 16:38:20.214985 kubelet[2751]: E0904 16:38:20.214930 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897" Sep 4 16:38:21.571111 containerd[1609]: time="2025-09-04T16:38:21.571059693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:21.571921 containerd[1609]: time="2025-09-04T16:38:21.571853798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 16:38:21.573078 containerd[1609]: time="2025-09-04T16:38:21.573047596Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:21.575148 containerd[1609]: time="2025-09-04T16:38:21.575100354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:21.575609 containerd[1609]: time="2025-09-04T16:38:21.575551573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.667097299s" Sep 4 16:38:21.575609 containerd[1609]: time="2025-09-04T16:38:21.575601897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 16:38:21.576412 containerd[1609]: time="2025-09-04T16:38:21.576374948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 16:38:21.583907 containerd[1609]: time="2025-09-04T16:38:21.582928684Z" level=info msg="CreateContainer within sandbox \"6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 16:38:21.591246 containerd[1609]: time="2025-09-04T16:38:21.591204936Z" level=info msg="Container 034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:21.599448 containerd[1609]: time="2025-09-04T16:38:21.599407367Z" level=info msg="CreateContainer within sandbox \"6cf517f0c46e0249685655de333191d624f4bb9dccd046342410f12b886091bc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766\"" Sep 4 16:38:21.599898 containerd[1609]: time="2025-09-04T16:38:21.599838675Z" level=info msg="StartContainer for \"034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766\"" Sep 4 16:38:21.600826 containerd[1609]: time="2025-09-04T16:38:21.600792600Z" level=info msg="connecting to shim 034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766" address="unix:///run/containerd/s/12ef5898168602668d0bb5d81cf38e3d0768ebd1de8fe4fb96e3451b84e40b69" protocol=ttrpc version=3 Sep 4 16:38:21.630016 systemd[1]: Started cri-containerd-034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766.scope - libcontainer container 034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766. 
Sep 4 16:38:21.678765 containerd[1609]: time="2025-09-04T16:38:21.678724718Z" level=info msg="StartContainer for \"034882b469885523265f8d8f43f68ca5e50b601297f92095e5baf5faf6872766\" returns successfully" Sep 4 16:38:22.215121 kubelet[2751]: E0904 16:38:22.215079 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897" Sep 4 16:38:22.277677 kubelet[2751]: E0904 16:38:22.277644 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:22.286648 kubelet[2751]: I0904 16:38:22.286575 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6665c8cdf9-v6hl8" podStartSLOduration=1.6184525440000002 podStartE2EDuration="5.286534142s" podCreationTimestamp="2025-09-04 16:38:17 +0000 UTC" firstStartedPulling="2025-09-04 16:38:17.908165177 +0000 UTC m=+17.769106297" lastFinishedPulling="2025-09-04 16:38:21.576246775 +0000 UTC m=+21.437187895" observedRunningTime="2025-09-04 16:38:22.285665972 +0000 UTC m=+22.146607092" watchObservedRunningTime="2025-09-04 16:38:22.286534142 +0000 UTC m=+22.147475262" Sep 4 16:38:22.345637 kubelet[2751]: E0904 16:38:22.345614 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.345637 kubelet[2751]: W0904 16:38:22.345632 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.345637 kubelet[2751]: E0904 16:38:22.345649 2751 plugins.go:691] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.345901 kubelet[2751]: E0904 16:38:22.345874 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.345942 kubelet[2751]: W0904 16:38:22.345906 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.345942 kubelet[2751]: E0904 16:38:22.345916 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.346096 kubelet[2751]: E0904 16:38:22.346082 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.346096 kubelet[2751]: W0904 16:38:22.346093 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.346144 kubelet[2751]: E0904 16:38:22.346101 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.346283 kubelet[2751]: E0904 16:38:22.346268 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.346283 kubelet[2751]: W0904 16:38:22.346278 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.346350 kubelet[2751]: E0904 16:38:22.346287 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.346480 kubelet[2751]: E0904 16:38:22.346465 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.346480 kubelet[2751]: W0904 16:38:22.346475 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.346480 kubelet[2751]: E0904 16:38:22.346482 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.346663 kubelet[2751]: E0904 16:38:22.346638 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.346663 kubelet[2751]: W0904 16:38:22.346648 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.346663 kubelet[2751]: E0904 16:38:22.346657 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.346834 kubelet[2751]: E0904 16:38:22.346807 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.346834 kubelet[2751]: W0904 16:38:22.346817 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.346834 kubelet[2751]: E0904 16:38:22.346824 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.347070 kubelet[2751]: E0904 16:38:22.347044 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347070 kubelet[2751]: W0904 16:38:22.347055 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347070 kubelet[2751]: E0904 16:38:22.347062 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.347256 kubelet[2751]: E0904 16:38:22.347239 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347256 kubelet[2751]: W0904 16:38:22.347249 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347256 kubelet[2751]: E0904 16:38:22.347256 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.347420 kubelet[2751]: E0904 16:38:22.347404 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347420 kubelet[2751]: W0904 16:38:22.347413 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347469 kubelet[2751]: E0904 16:38:22.347422 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.347597 kubelet[2751]: E0904 16:38:22.347582 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347597 kubelet[2751]: W0904 16:38:22.347591 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347666 kubelet[2751]: E0904 16:38:22.347598 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.347774 kubelet[2751]: E0904 16:38:22.347757 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347774 kubelet[2751]: W0904 16:38:22.347766 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347774 kubelet[2751]: E0904 16:38:22.347774 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.347954 kubelet[2751]: E0904 16:38:22.347937 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.347954 kubelet[2751]: W0904 16:38:22.347947 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.347954 kubelet[2751]: E0904 16:38:22.347954 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.348128 kubelet[2751]: E0904 16:38:22.348111 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.348128 kubelet[2751]: W0904 16:38:22.348120 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.348128 kubelet[2751]: E0904 16:38:22.348127 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.348292 kubelet[2751]: E0904 16:38:22.348275 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.348292 kubelet[2751]: W0904 16:38:22.348285 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.348292 kubelet[2751]: E0904 16:38:22.348292 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.351733 kubelet[2751]: E0904 16:38:22.351709 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.351733 kubelet[2751]: W0904 16:38:22.351729 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.351791 kubelet[2751]: E0904 16:38:22.351751 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.352003 kubelet[2751]: E0904 16:38:22.351987 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.352003 kubelet[2751]: W0904 16:38:22.351998 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.352058 kubelet[2751]: E0904 16:38:22.352013 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.352218 kubelet[2751]: E0904 16:38:22.352202 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.352218 kubelet[2751]: W0904 16:38:22.352212 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.352269 kubelet[2751]: E0904 16:38:22.352225 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.352426 kubelet[2751]: E0904 16:38:22.352412 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.352426 kubelet[2751]: W0904 16:38:22.352421 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.352482 kubelet[2751]: E0904 16:38:22.352436 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.352643 kubelet[2751]: E0904 16:38:22.352626 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.352643 kubelet[2751]: W0904 16:38:22.352639 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.352697 kubelet[2751]: E0904 16:38:22.352656 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.352854 kubelet[2751]: E0904 16:38:22.352839 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.352854 kubelet[2751]: W0904 16:38:22.352849 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.352927 kubelet[2751]: E0904 16:38:22.352860 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.353092 kubelet[2751]: E0904 16:38:22.353076 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.353092 kubelet[2751]: W0904 16:38:22.353086 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.353159 kubelet[2751]: E0904 16:38:22.353117 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.353278 kubelet[2751]: E0904 16:38:22.353247 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.353278 kubelet[2751]: W0904 16:38:22.353257 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.353335 kubelet[2751]: E0904 16:38:22.353301 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.353410 kubelet[2751]: E0904 16:38:22.353396 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.353410 kubelet[2751]: W0904 16:38:22.353406 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.353464 kubelet[2751]: E0904 16:38:22.353420 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.353618 kubelet[2751]: E0904 16:38:22.353590 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.353618 kubelet[2751]: W0904 16:38:22.353600 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.353618 kubelet[2751]: E0904 16:38:22.353613 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.353792 kubelet[2751]: E0904 16:38:22.353779 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.353792 kubelet[2751]: W0904 16:38:22.353788 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.353846 kubelet[2751]: E0904 16:38:22.353801 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.354012 kubelet[2751]: E0904 16:38:22.353998 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.354012 kubelet[2751]: W0904 16:38:22.354007 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.354067 kubelet[2751]: E0904 16:38:22.354020 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.354264 kubelet[2751]: E0904 16:38:22.354246 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.354264 kubelet[2751]: W0904 16:38:22.354260 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.354326 kubelet[2751]: E0904 16:38:22.354276 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.354470 kubelet[2751]: E0904 16:38:22.354444 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.354470 kubelet[2751]: W0904 16:38:22.354456 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.354521 kubelet[2751]: E0904 16:38:22.354478 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.354676 kubelet[2751]: E0904 16:38:22.354662 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.354676 kubelet[2751]: W0904 16:38:22.354672 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.354745 kubelet[2751]: E0904 16:38:22.354684 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.354866 kubelet[2751]: E0904 16:38:22.354849 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.354866 kubelet[2751]: W0904 16:38:22.354861 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.354955 kubelet[2751]: E0904 16:38:22.354872 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:22.355209 kubelet[2751]: E0904 16:38:22.355192 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.355209 kubelet[2751]: W0904 16:38:22.355207 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.355341 kubelet[2751]: E0904 16:38:22.355222 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:22.355414 kubelet[2751]: E0904 16:38:22.355395 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:22.355456 kubelet[2751]: W0904 16:38:22.355418 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:22.355456 kubelet[2751]: E0904 16:38:22.355427 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:23.257792 containerd[1609]: time="2025-09-04T16:38:23.257748944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:23.258681 containerd[1609]: time="2025-09-04T16:38:23.258624031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 16:38:23.259668 containerd[1609]: time="2025-09-04T16:38:23.259629705Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:23.261586 containerd[1609]: time="2025-09-04T16:38:23.261552070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:23.262000 containerd[1609]: time="2025-09-04T16:38:23.261966197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.685564133s" Sep 4 16:38:23.262000 containerd[1609]: time="2025-09-04T16:38:23.261998352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 16:38:23.263830 containerd[1609]: time="2025-09-04T16:38:23.263783438Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 16:38:23.273127 containerd[1609]: time="2025-09-04T16:38:23.273090414Z" level=info msg="Container cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:23.278608 kubelet[2751]: I0904 16:38:23.278582 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:38:23.278912 kubelet[2751]: E0904 16:38:23.278881 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:23.281732 containerd[1609]: time="2025-09-04T16:38:23.281692791Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\"" Sep 4 16:38:23.282025 containerd[1609]: time="2025-09-04T16:38:23.282003125Z" level=info msg="StartContainer for \"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\"" Sep 4 16:38:23.283237 containerd[1609]: time="2025-09-04T16:38:23.283210151Z" level=info msg="connecting to shim cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35" address="unix:///run/containerd/s/4dcf845fb3159060d7e20e144a3d87b95d4d6c2d3eec8c05d8a557b582dd408b" protocol=ttrpc version=3 Sep 4 16:38:23.308113 systemd[1]: Started cri-containerd-cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35.scope - libcontainer container cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35. 
Sep 4 16:38:23.356858 kubelet[2751]: E0904 16:38:23.356830 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:23.356858 kubelet[2751]: W0904 16:38:23.356848 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:23.356858 kubelet[2751]: E0904 16:38:23.356864 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:23.357201 kubelet[2751]: E0904 16:38:23.357079 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:23.357201 kubelet[2751]: W0904 16:38:23.357087 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:23.357201 kubelet[2751]: E0904 16:38:23.357095 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:23.357305 kubelet[2751]: E0904 16:38:23.357260 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:23.357305 kubelet[2751]: W0904 16:38:23.357268 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:23.357305 kubelet[2751]: E0904 16:38:23.357275 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:38:23.357441 kubelet[2751]: E0904 16:38:23.357422 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:23.357441 kubelet[2751]: W0904 16:38:23.357429 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:23.357441 kubelet[2751]: E0904 16:38:23.357436 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:23.357609 kubelet[2751]: E0904 16:38:23.357587 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:38:23.357609 kubelet[2751]: W0904 16:38:23.357602 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:38:23.357609 kubelet[2751]: E0904 16:38:23.357609 2751 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:38:23.358854 systemd[1]: cri-containerd-cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35.scope: Deactivated successfully. 
Sep 4 16:38:23.362419 containerd[1609]: time="2025-09-04T16:38:23.362362592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\" id:\"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\" pid:3452 exited_at:{seconds:1757003903 nanos:361857501}"
Sep 4 16:38:23.389940 containerd[1609]: time="2025-09-04T16:38:23.389900550Z" level=info msg="received exit event container_id:\"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\" id:\"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\" pid:3452 exited_at:{seconds:1757003903 nanos:361857501}"
Sep 4 16:38:23.390973 containerd[1609]: time="2025-09-04T16:38:23.390936746Z" level=info msg="StartContainer for \"cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35\" returns successfully"
Sep 4 16:38:23.412636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc6ab5077334662695555647b77da9ff2a1e375323509b16706833f10f19b35-rootfs.mount: Deactivated successfully.
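The exited_at field containerd logs for a TaskExit event is a protobuf timestamp (seconds plus nanoseconds since the Unix epoch). As a sanity check while reading these entries, converting it back lands on the same instant the surrounding log lines report; a small Go sketch:

```go
package main

import (
	"fmt"
	"time"
)

// exitedAtToRFC3339 converts the exited_at {seconds, nanos} protobuf
// timestamp pair from a containerd TaskExit event into an RFC 3339 string.
func exitedAtToRFC3339(seconds, nanos int64) string {
	return time.Unix(seconds, nanos).UTC().Format(time.RFC3339Nano)
}

func main() {
	// Values taken from the TaskExit event above.
	fmt.Println(exitedAtToRFC3339(1757003903, 361857501))
	// 2025-09-04T16:38:23.361857501Z
}
```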
Sep 4 16:38:24.215071 kubelet[2751]: E0904 16:38:24.215025 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897"
Sep 4 16:38:24.282388 containerd[1609]: time="2025-09-04T16:38:24.282346040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 4 16:38:26.214495 kubelet[2751]: E0904 16:38:26.214447 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897"
Sep 4 16:38:28.215028 kubelet[2751]: E0904 16:38:28.214978 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897"
Sep 4 16:38:28.369834 containerd[1609]: time="2025-09-04T16:38:28.369781061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:28.370497 containerd[1609]: time="2025-09-04T16:38:28.370464073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 4 16:38:28.371434 containerd[1609]: time="2025-09-04T16:38:28.371403319Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:28.373284 containerd[1609]: time="2025-09-04T16:38:28.373254005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:28.373839 containerd[1609]: time="2025-09-04T16:38:28.373798017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.091415903s"
Sep 4 16:38:28.373839 containerd[1609]: time="2025-09-04T16:38:28.373835823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 4 16:38:28.375471 containerd[1609]: time="2025-09-04T16:38:28.375421307Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 16:38:28.384231 containerd[1609]: time="2025-09-04T16:38:28.384196648Z" level=info msg="Container 4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:28.392819 containerd[1609]: time="2025-09-04T16:38:28.392786116Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\""
Sep 4 16:38:28.393280 containerd[1609]: time="2025-09-04T16:38:28.393242723Z" level=info msg="StartContainer for \"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\""
Sep 4 16:38:28.394585 containerd[1609]: time="2025-09-04T16:38:28.394561932Z" level=info msg="connecting to shim 4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a" address="unix:///run/containerd/s/4dcf845fb3159060d7e20e144a3d87b95d4d6c2d3eec8c05d8a557b582dd408b" protocol=ttrpc version=3
Sep 4 16:38:28.421025 systemd[1]: Started cri-containerd-4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a.scope - libcontainer container 4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a.
Sep 4 16:38:28.462977 containerd[1609]: time="2025-09-04T16:38:28.462935009Z" level=info msg="StartContainer for \"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\" returns successfully"
Sep 4 16:38:29.449473 containerd[1609]: time="2025-09-04T16:38:29.449421275Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 16:38:29.452367 systemd[1]: cri-containerd-4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a.scope: Deactivated successfully.
Sep 4 16:38:29.452955 systemd[1]: cri-containerd-4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a.scope: Consumed 593ms CPU time, 179.5M memory peak, 2.3M read from disk, 171.3M written to disk.
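The "no network config found in /etc/cni/net.d" reload error fires while the install-cni container is still writing files: the write that triggered the fs event was /etc/cni/net.d/calico-kubeconfig, which is not itself a network config, and containerd's CNI loader (to my understanding) only considers *.conf, *.conflist and *.json files. A hedged Go sketch of that selection logic (hasCNIConfig is a hypothetical helper, not containerd's API):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// hasCNIConfig reports whether a CNI conf directory contains anything the
// loader would treat as network config. The *.conf/*.conflist/*.json
// patterns mirror (loosely) containerd's CNI library; files like
// calico-kubeconfig match none of them, so the directory still counts
// as empty until a .conflist lands.
func hasCNIConfig(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Path taken from the log; on another host it may differ.
	ok, _ := hasCNIConfig("/etc/cni/net.d")
	fmt.Println("CNI config present:", ok)
}
```

Once the install-cni container finishes and a proper *.conflist appears, the runtime's NetworkReady condition flips and the "cni plugin not initialized" pod_workers errors stop.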
Sep 4 16:38:29.453353 containerd[1609]: time="2025-09-04T16:38:29.453326496Z" level=info msg="received exit event container_id:\"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\" id:\"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\" pid:3523 exited_at:{seconds:1757003909 nanos:453052036}"
Sep 4 16:38:29.453662 containerd[1609]: time="2025-09-04T16:38:29.453599232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\" id:\"4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a\" pid:3523 exited_at:{seconds:1757003909 nanos:453052036}"
Sep 4 16:38:29.458826 kubelet[2751]: I0904 16:38:29.458800 2751 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 4 16:38:29.479386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f3d8c9794c088bcb0d435fe16712eae64ec4e235c265d4f0059b722a4a9275a-rootfs.mount: Deactivated successfully.
Sep 4 16:38:29.498069 systemd[1]: Created slice kubepods-burstable-poda4efebbf_9f8c_4a90_84b0_5c91397dcaf7.slice - libcontainer container kubepods-burstable-poda4efebbf_9f8c_4a90_84b0_5c91397dcaf7.slice.
Sep 4 16:38:29.501294 kubelet[2751]: I0904 16:38:29.498479 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1cf2ef1c-ed9a-45fe-b40b-c7c747394568-calico-apiserver-certs\") pod \"calico-apiserver-64b68b94f-sh2d4\" (UID: \"1cf2ef1c-ed9a-45fe-b40b-c7c747394568\") " pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4"
Sep 4 16:38:29.501294 kubelet[2751]: I0904 16:38:29.498551 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74549879-72cf-44a7-aeac-80e0d4bf8fab-calico-apiserver-certs\") pod \"calico-apiserver-64b68b94f-jk85x\" (UID: \"74549879-72cf-44a7-aeac-80e0d4bf8fab\") " pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x"
Sep 4 16:38:29.501294 kubelet[2751]: I0904 16:38:29.498582 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qdp\" (UniqueName: \"kubernetes.io/projected/f71802a3-8f51-4535-ba85-ea33c10efa51-kube-api-access-65qdp\") pod \"calico-kube-controllers-7865959f87-8s5vl\" (UID: \"f71802a3-8f51-4535-ba85-ea33c10efa51\") " pod="calico-system/calico-kube-controllers-7865959f87-8s5vl"
Sep 4 16:38:29.501294 kubelet[2751]: I0904 16:38:29.498627 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4efebbf-9f8c-4a90-84b0-5c91397dcaf7-config-volume\") pod \"coredns-7c65d6cfc9-tt6nk\" (UID: \"a4efebbf-9f8c-4a90-84b0-5c91397dcaf7\") " pod="kube-system/coredns-7c65d6cfc9-tt6nk"
Sep 4 16:38:29.501294 kubelet[2751]: I0904 16:38:29.498643 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/736b22e8-5edf-4ff6-a411-ae55a48db23f-goldmane-ca-bundle\") pod \"goldmane-7988f88666-rpcnd\" (UID: \"736b22e8-5edf-4ff6-a411-ae55a48db23f\") " pod="calico-system/goldmane-7988f88666-rpcnd"
Sep 4 16:38:29.501545 kubelet[2751]: I0904 16:38:29.498733 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/184689f5-f11a-4c95-81d1-690a0392b7bc-config-volume\") pod \"coredns-7c65d6cfc9-j9q9l\" (UID: \"184689f5-f11a-4c95-81d1-690a0392b7bc\") " pod="kube-system/coredns-7c65d6cfc9-j9q9l"
Sep 4 16:38:29.501545 kubelet[2751]: I0904 16:38:29.498752 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71802a3-8f51-4535-ba85-ea33c10efa51-tigera-ca-bundle\") pod \"calico-kube-controllers-7865959f87-8s5vl\" (UID: \"f71802a3-8f51-4535-ba85-ea33c10efa51\") " pod="calico-system/calico-kube-controllers-7865959f87-8s5vl"
Sep 4 16:38:29.501545 kubelet[2751]: I0904 16:38:29.498768 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-backend-key-pair\") pod \"whisker-7fff96445d-dm5hl\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " pod="calico-system/whisker-7fff96445d-dm5hl"
Sep 4 16:38:29.501545 kubelet[2751]: I0904 16:38:29.498810 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xkf\" (UniqueName: \"kubernetes.io/projected/184689f5-f11a-4c95-81d1-690a0392b7bc-kube-api-access-z2xkf\") pod \"coredns-7c65d6cfc9-j9q9l\" (UID: \"184689f5-f11a-4c95-81d1-690a0392b7bc\") " pod="kube-system/coredns-7c65d6cfc9-j9q9l"
Sep 4 16:38:29.501545 kubelet[2751]: I0904 16:38:29.498827 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-ca-bundle\") pod \"whisker-7fff96445d-dm5hl\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " pod="calico-system/whisker-7fff96445d-dm5hl"
Sep 4 16:38:29.501709 kubelet[2751]: I0904 16:38:29.498841 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6lc\" (UniqueName: \"kubernetes.io/projected/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-kube-api-access-2j6lc\") pod \"whisker-7fff96445d-dm5hl\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " pod="calico-system/whisker-7fff96445d-dm5hl"
Sep 4 16:38:29.501709 kubelet[2751]: I0904 16:38:29.498864 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9qkv\" (UniqueName: \"kubernetes.io/projected/74549879-72cf-44a7-aeac-80e0d4bf8fab-kube-api-access-g9qkv\") pod \"calico-apiserver-64b68b94f-jk85x\" (UID: \"74549879-72cf-44a7-aeac-80e0d4bf8fab\") " pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x"
Sep 4 16:38:29.501709 kubelet[2751]: I0904 16:38:29.498924 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfw8\" (UniqueName: \"kubernetes.io/projected/1cf2ef1c-ed9a-45fe-b40b-c7c747394568-kube-api-access-mxfw8\") pod \"calico-apiserver-64b68b94f-sh2d4\" (UID: \"1cf2ef1c-ed9a-45fe-b40b-c7c747394568\") " pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4"
Sep 4 16:38:29.501709 kubelet[2751]: I0904 16:38:29.498941 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t82r\" (UniqueName: \"kubernetes.io/projected/a4efebbf-9f8c-4a90-84b0-5c91397dcaf7-kube-api-access-9t82r\") pod \"coredns-7c65d6cfc9-tt6nk\" (UID: \"a4efebbf-9f8c-4a90-84b0-5c91397dcaf7\") " pod="kube-system/coredns-7c65d6cfc9-tt6nk"
Sep 4 16:38:29.501709 kubelet[2751]: I0904 16:38:29.498985 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736b22e8-5edf-4ff6-a411-ae55a48db23f-config\") pod \"goldmane-7988f88666-rpcnd\" (UID: \"736b22e8-5edf-4ff6-a411-ae55a48db23f\") " pod="calico-system/goldmane-7988f88666-rpcnd"
Sep 4 16:38:29.501849 kubelet[2751]: I0904 16:38:29.499002 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/736b22e8-5edf-4ff6-a411-ae55a48db23f-goldmane-key-pair\") pod \"goldmane-7988f88666-rpcnd\" (UID: \"736b22e8-5edf-4ff6-a411-ae55a48db23f\") " pod="calico-system/goldmane-7988f88666-rpcnd"
Sep 4 16:38:29.501849 kubelet[2751]: I0904 16:38:29.499018 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gtxk\" (UniqueName: \"kubernetes.io/projected/736b22e8-5edf-4ff6-a411-ae55a48db23f-kube-api-access-4gtxk\") pod \"goldmane-7988f88666-rpcnd\" (UID: \"736b22e8-5edf-4ff6-a411-ae55a48db23f\") " pod="calico-system/goldmane-7988f88666-rpcnd"
Sep 4 16:38:29.507600 systemd[1]: Created slice kubepods-besteffort-podf71802a3_8f51_4535_ba85_ea33c10efa51.slice - libcontainer container kubepods-besteffort-podf71802a3_8f51_4535_ba85_ea33c10efa51.slice.
Sep 4 16:38:29.513548 systemd[1]: Created slice kubepods-besteffort-pod74549879_72cf_44a7_aeac_80e0d4bf8fab.slice - libcontainer container kubepods-besteffort-pod74549879_72cf_44a7_aeac_80e0d4bf8fab.slice.
Sep 4 16:38:29.518559 systemd[1]: Created slice kubepods-besteffort-pod5ecdf0cb_2c4d_43d1_9392_70e9820b6e71.slice - libcontainer container kubepods-besteffort-pod5ecdf0cb_2c4d_43d1_9392_70e9820b6e71.slice.
Sep 4 16:38:29.530094 systemd[1]: Created slice kubepods-besteffort-pod1cf2ef1c_ed9a_45fe_b40b_c7c747394568.slice - libcontainer container kubepods-besteffort-pod1cf2ef1c_ed9a_45fe_b40b_c7c747394568.slice.
Sep 4 16:38:29.535344 systemd[1]: Created slice kubepods-besteffort-pod736b22e8_5edf_4ff6_a411_ae55a48db23f.slice - libcontainer container kubepods-besteffort-pod736b22e8_5edf_4ff6_a411_ae55a48db23f.slice.
Sep 4 16:38:29.539829 systemd[1]: Created slice kubepods-burstable-pod184689f5_f11a_4c95_81d1_690a0392b7bc.slice - libcontainer container kubepods-burstable-pod184689f5_f11a_4c95_81d1_690a0392b7bc.slice.
Sep 4 16:38:29.858394 kubelet[2751]: E0904 16:38:29.858357 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:29.858394 kubelet[2751]: E0904 16:38:29.858496 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:29.858721 containerd[1609]: time="2025-09-04T16:38:29.858574342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-sh2d4,Uid:1cf2ef1c-ed9a-45fe-b40b-c7c747394568,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 16:38:29.858963 containerd[1609]: time="2025-09-04T16:38:29.858935956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-jk85x,Uid:74549879-72cf-44a7-aeac-80e0d4bf8fab,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 16:38:29.859068 containerd[1609]: time="2025-09-04T16:38:29.859037319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fff96445d-dm5hl,Uid:5ecdf0cb-2c4d-43d1-9392-70e9820b6e71,Namespace:calico-system,Attempt:0,}"
Sep 4 16:38:29.859147 containerd[1609]: time="2025-09-04T16:38:29.859127530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tt6nk,Uid:a4efebbf-9f8c-4a90-84b0-5c91397dcaf7,Namespace:kube-system,Attempt:0,}"
Sep 4 16:38:29.859173 containerd[1609]: time="2025-09-04T16:38:29.859144985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7865959f87-8s5vl,Uid:f71802a3-8f51-4535-ba85-ea33c10efa51,Namespace:calico-system,Attempt:0,}"
Sep 4 16:38:29.859201 containerd[1609]: time="2025-09-04T16:38:29.859129404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9q9l,Uid:184689f5-f11a-4c95-81d1-690a0392b7bc,Namespace:kube-system,Attempt:0,}"
Sep 4 16:38:29.859361 containerd[1609]: time="2025-09-04T16:38:29.859328142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rpcnd,Uid:736b22e8-5edf-4ff6-a411-ae55a48db23f,Namespace:calico-system,Attempt:0,}"
Sep 4 16:38:29.972233 containerd[1609]: time="2025-09-04T16:38:29.972177787Z" level=error msg="Failed to destroy network for sandbox \"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.974091 containerd[1609]: time="2025-09-04T16:38:29.974050396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fff96445d-dm5hl,Uid:5ecdf0cb-2c4d-43d1-9392-70e9820b6e71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.980568 kubelet[2751]: E0904 16:38:29.980519 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.980655 kubelet[2751]: E0904 16:38:29.980601 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fff96445d-dm5hl"
Sep 4 16:38:29.980655 kubelet[2751]: E0904 16:38:29.980622 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fff96445d-dm5hl"
Sep 4 16:38:29.980718 kubelet[2751]: E0904 16:38:29.980668 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fff96445d-dm5hl_calico-system(5ecdf0cb-2c4d-43d1-9392-70e9820b6e71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fff96445d-dm5hl_calico-system(5ecdf0cb-2c4d-43d1-9392-70e9820b6e71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad42668a2f7086ed8a57e18bc3324a8108721be2cfe53ce6fe99a1ed1a64131f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fff96445d-dm5hl" podUID="5ecdf0cb-2c4d-43d1-9392-70e9820b6e71"
Sep 4 16:38:29.982986 containerd[1609]: time="2025-09-04T16:38:29.982749316Z" level=error msg="Failed to destroy network for sandbox \"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.989031 containerd[1609]: time="2025-09-04T16:38:29.988994121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tt6nk,Uid:a4efebbf-9f8c-4a90-84b0-5c91397dcaf7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.989250 kubelet[2751]: E0904 16:38:29.989218 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.989349 kubelet[2751]: E0904 16:38:29.989333 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tt6nk"
Sep 4 16:38:29.989446 kubelet[2751]: E0904 16:38:29.989418 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tt6nk"
Sep 4 16:38:29.989568 kubelet[2751]: E0904 16:38:29.989526 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tt6nk_kube-system(a4efebbf-9f8c-4a90-84b0-5c91397dcaf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tt6nk_kube-system(a4efebbf-9f8c-4a90-84b0-5c91397dcaf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fde2e481a4d644f0803af5a71432edf9ac23bc815ef03189a7b694f69518b640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tt6nk" podUID="a4efebbf-9f8c-4a90-84b0-5c91397dcaf7"
Sep 4 16:38:29.994704 containerd[1609]: time="2025-09-04T16:38:29.994672231Z" level=error msg="Failed to destroy network for sandbox \"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.996327 containerd[1609]: time="2025-09-04T16:38:29.996278488Z" level=error msg="Failed to destroy network for sandbox \"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.996917 containerd[1609]: time="2025-09-04T16:38:29.996869951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9q9l,Uid:184689f5-f11a-4c95-81d1-690a0392b7bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.997235 kubelet[2751]: E0904 16:38:29.997183 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.997235 kubelet[2751]: E0904 16:38:29.997218 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j9q9l"
Sep 4 16:38:29.997485 kubelet[2751]: E0904 16:38:29.997254 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j9q9l"
Sep 4 16:38:29.997728 kubelet[2751]: E0904 16:38:29.997292 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-j9q9l_kube-system(184689f5-f11a-4c95-81d1-690a0392b7bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j9q9l_kube-system(184689f5-f11a-4c95-81d1-690a0392b7bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf954b107c1e90c73329dc2266738ab9753a038c145eeacc10121fdd37b026db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j9q9l" podUID="184689f5-f11a-4c95-81d1-690a0392b7bc"
Sep 4 16:38:29.998104 containerd[1609]: time="2025-09-04T16:38:29.998064223Z" level=error msg="Failed to destroy network for sandbox \"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.999319 containerd[1609]: time="2025-09-04T16:38:29.999243414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7865959f87-8s5vl,Uid:f71802a3-8f51-4535-ba85-ea33c10efa51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.999534 kubelet[2751]: E0904 16:38:29.999500 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:29.999705 kubelet[2751]: E0904 16:38:29.999664 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7865959f87-8s5vl"
Sep 4 16:38:29.999864 kubelet[2751]: E0904 16:38:29.999761 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7865959f87-8s5vl"
Sep 4 16:38:30.000053 kubelet[2751]: E0904 16:38:29.999841 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7865959f87-8s5vl_calico-system(f71802a3-8f51-4535-ba85-ea33c10efa51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7865959f87-8s5vl_calico-system(f71802a3-8f51-4535-ba85-ea33c10efa51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb3d50388249506a07eba56d16e67c0655371cf81982ef2dcf6541acb9f22cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7865959f87-8s5vl" podUID="f71802a3-8f51-4535-ba85-ea33c10efa51"
Sep 4 16:38:30.001187 containerd[1609]: time="2025-09-04T16:38:30.001158998Z" level=error msg="Failed to destroy network for sandbox \"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.002044 containerd[1609]: time="2025-09-04T16:38:30.001979568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-sh2d4,Uid:1cf2ef1c-ed9a-45fe-b40b-c7c747394568,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.002379 containerd[1609]: time="2025-09-04T16:38:30.002350638Z" level=error msg="Failed to destroy network for sandbox \"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.002738 kubelet[2751]: E0904 16:38:30.002447 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.002738 kubelet[2751]: E0904 16:38:30.002485 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4"
Sep 4 16:38:30.002738 kubelet[2751]: E0904 16:38:30.002518 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4"
Sep 4 16:38:30.002850 kubelet[2751]: E0904 16:38:30.002545 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64b68b94f-sh2d4_calico-apiserver(1cf2ef1c-ed9a-45fe-b40b-c7c747394568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64b68b94f-sh2d4_calico-apiserver(1cf2ef1c-ed9a-45fe-b40b-c7c747394568)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36c4d9778e827e4602c2c5c1b547bdfa42b5f0179168b42a875abaa2898f4f20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4" podUID="1cf2ef1c-ed9a-45fe-b40b-c7c747394568"
Sep 4 16:38:30.003097 containerd[1609]: time="2025-09-04T16:38:30.003063803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-jk85x,Uid:74549879-72cf-44a7-aeac-80e0d4bf8fab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.003354 kubelet[2751]: E0904 16:38:30.003322 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 16:38:30.003403 kubelet[2751]: E0904 16:38:30.003368 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x"
Sep 4 16:38:30.003403 kubelet[2751]: E0904 16:38:30.003387 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x"
Sep 4 16:38:30.003458 kubelet[2751]: E0904 16:38:30.003421 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64b68b94f-jk85x_calico-apiserver(74549879-72cf-44a7-aeac-80e0d4bf8fab)\" with CreatePodSandboxError: \"Failed to create sandbox for
pod \\\"calico-apiserver-64b68b94f-jk85x_calico-apiserver(74549879-72cf-44a7-aeac-80e0d4bf8fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d5f40690af384d1eb6b743b474b42f96eea9a43bfc25a25dffa0e088d248395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x" podUID="74549879-72cf-44a7-aeac-80e0d4bf8fab" Sep 4 16:38:30.004320 containerd[1609]: time="2025-09-04T16:38:30.004283519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rpcnd,Uid:736b22e8-5edf-4ff6-a411-ae55a48db23f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:38:30.004425 kubelet[2751]: E0904 16:38:30.004401 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:38:30.004455 kubelet[2751]: E0904 16:38:30.004429 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7988f88666-rpcnd" Sep 4 16:38:30.004455 kubelet[2751]: E0904 16:38:30.004443 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-rpcnd" Sep 4 16:38:30.004588 kubelet[2751]: E0904 16:38:30.004473 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-rpcnd_calico-system(736b22e8-5edf-4ff6-a411-ae55a48db23f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-rpcnd_calico-system(736b22e8-5edf-4ff6-a411-ae55a48db23f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12bf1956be20e8146ec89abc27181098998b21ab22336baa8c39799ff87fcc2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rpcnd" podUID="736b22e8-5edf-4ff6-a411-ae55a48db23f" Sep 4 16:38:30.220559 systemd[1]: Created slice kubepods-besteffort-podf37925be_17ad_4b82_a217_d98beb0e0897.slice - libcontainer container kubepods-besteffort-podf37925be_17ad_4b82_a217_d98beb0e0897.slice. 
Sep 4 16:38:30.222605 containerd[1609]: time="2025-09-04T16:38:30.222574839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vr9,Uid:f37925be-17ad-4b82-a217-d98beb0e0897,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:30.270872 containerd[1609]: time="2025-09-04T16:38:30.270820716Z" level=error msg="Failed to destroy network for sandbox \"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:38:30.272162 containerd[1609]: time="2025-09-04T16:38:30.272116493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vr9,Uid:f37925be-17ad-4b82-a217-d98beb0e0897,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:38:30.272303 kubelet[2751]: E0904 16:38:30.272269 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:38:30.272355 kubelet[2751]: E0904 16:38:30.272321 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:30.272355 kubelet[2751]: E0904 16:38:30.272340 2751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4vr9" Sep 4 16:38:30.272421 kubelet[2751]: E0904 16:38:30.272382 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4vr9_calico-system(f37925be-17ad-4b82-a217-d98beb0e0897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4vr9_calico-system(f37925be-17ad-4b82-a217-d98beb0e0897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"250f78a67a8c631f5ea528d72c5c340baee15c0ca3e3e568cc2417bedc7f08a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4vr9" podUID="f37925be-17ad-4b82-a217-d98beb0e0897" Sep 4 16:38:30.301726 containerd[1609]: time="2025-09-04T16:38:30.301688615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 16:38:38.114861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799721441.mount: Deactivated successfully. 
Sep 4 16:38:38.963846 containerd[1609]: time="2025-09-04T16:38:38.963788193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:38.965717 containerd[1609]: time="2025-09-04T16:38:38.965663178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 16:38:38.966433 containerd[1609]: time="2025-09-04T16:38:38.966401038Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:38.971855 containerd[1609]: time="2025-09-04T16:38:38.971800891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:38.972851 containerd[1609]: time="2025-09-04T16:38:38.972815115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.671095076s" Sep 4 16:38:38.972851 containerd[1609]: time="2025-09-04T16:38:38.972841626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 16:38:38.988683 containerd[1609]: time="2025-09-04T16:38:38.988636951Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 16:38:38.999351 systemd[1]: Started sshd@7-10.0.0.3:22-10.0.0.1:58740.service - OpenSSH per-connection 
server daemon (10.0.0.1:58740). Sep 4 16:38:39.029798 containerd[1609]: time="2025-09-04T16:38:39.029704822Z" level=info msg="Container 6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:39.056541 containerd[1609]: time="2025-09-04T16:38:39.056491522Z" level=info msg="CreateContainer within sandbox \"30d903eb5bb5142ec6fa55546e3b20e21549fe1abcc50bfb663426f4640302d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\"" Sep 4 16:38:39.062459 containerd[1609]: time="2025-09-04T16:38:39.062329890Z" level=info msg="StartContainer for \"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\"" Sep 4 16:38:39.063753 containerd[1609]: time="2025-09-04T16:38:39.063727011Z" level=info msg="connecting to shim 6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b" address="unix:///run/containerd/s/4dcf845fb3159060d7e20e144a3d87b95d4d6c2d3eec8c05d8a557b582dd408b" protocol=ttrpc version=3 Sep 4 16:38:39.077504 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 58740 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E Sep 4 16:38:39.078824 sshd-session[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:38:39.084600 systemd-logind[1591]: New session 8 of user core. Sep 4 16:38:39.092022 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 16:38:39.095215 systemd[1]: Started cri-containerd-6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b.scope - libcontainer container 6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b. Sep 4 16:38:39.143568 containerd[1609]: time="2025-09-04T16:38:39.143521147Z" level=info msg="StartContainer for \"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" returns successfully" Sep 4 16:38:39.216102 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Sep 4 16:38:39.216556 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 16:38:39.309521 sshd[3862]: Connection closed by 10.0.0.1 port 58740 Sep 4 16:38:39.311690 sshd-session[3845]: pam_unix(sshd:session): session closed for user core Sep 4 16:38:39.319594 systemd[1]: sshd@7-10.0.0.3:22-10.0.0.1:58740.service: Deactivated successfully. Sep 4 16:38:39.322281 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 16:38:39.328217 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Sep 4 16:38:39.335396 systemd-logind[1591]: Removed session 8. Sep 4 16:38:39.356687 kubelet[2751]: I0904 16:38:39.356627 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-backend-key-pair\") pod \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " Sep 4 16:38:39.360216 kubelet[2751]: I0904 16:38:39.358831 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j6lc\" (UniqueName: \"kubernetes.io/projected/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-kube-api-access-2j6lc\") pod \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " Sep 4 16:38:39.360216 kubelet[2751]: I0904 16:38:39.358973 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-ca-bundle\") pod \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\" (UID: \"5ecdf0cb-2c4d-43d1-9392-70e9820b6e71\") " Sep 4 16:38:39.361063 kubelet[2751]: I0904 16:38:39.361043 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-ca-bundle" (OuterVolumeSpecName: 
"whisker-ca-bundle") pod "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71" (UID: "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 16:38:39.367476 kubelet[2751]: I0904 16:38:39.367250 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v9f7d" podStartSLOduration=1.630555353 podStartE2EDuration="22.36157133s" podCreationTimestamp="2025-09-04 16:38:17 +0000 UTC" firstStartedPulling="2025-09-04 16:38:18.243014326 +0000 UTC m=+18.103955446" lastFinishedPulling="2025-09-04 16:38:38.974030313 +0000 UTC m=+38.834971423" observedRunningTime="2025-09-04 16:38:39.355702273 +0000 UTC m=+39.216643383" watchObservedRunningTime="2025-09-04 16:38:39.36157133 +0000 UTC m=+39.222512450" Sep 4 16:38:39.368253 systemd[1]: var-lib-kubelet-pods-5ecdf0cb\x2d2c4d\x2d43d1\x2d9392\x2d70e9820b6e71-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 16:38:39.368621 kubelet[2751]: I0904 16:38:39.368576 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-kube-api-access-2j6lc" (OuterVolumeSpecName: "kube-api-access-2j6lc") pod "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71" (UID: "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71"). InnerVolumeSpecName "kube-api-access-2j6lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 16:38:39.372713 systemd[1]: var-lib-kubelet-pods-5ecdf0cb\x2d2c4d\x2d43d1\x2d9392\x2d70e9820b6e71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2j6lc.mount: Deactivated successfully. 
Sep 4 16:38:39.375303 kubelet[2751]: I0904 16:38:39.375281 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71" (UID: "5ecdf0cb-2c4d-43d1-9392-70e9820b6e71"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 16:38:39.460368 kubelet[2751]: I0904 16:38:39.460193 2751 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 16:38:39.460368 kubelet[2751]: I0904 16:38:39.460226 2751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j6lc\" (UniqueName: \"kubernetes.io/projected/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-kube-api-access-2j6lc\") on node \"localhost\" DevicePath \"\"" Sep 4 16:38:39.460368 kubelet[2751]: I0904 16:38:39.460235 2751 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 16:38:39.552573 containerd[1609]: time="2025-09-04T16:38:39.552481772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" id:\"ddd36e637826c3aac2327ce2cfca82d7a7dd10cb510072831bb58c9524d67e6b\" pid:3948 exit_status:1 exited_at:{seconds:1757003919 nanos:552214658}" Sep 4 16:38:40.228010 systemd[1]: Removed slice kubepods-besteffort-pod5ecdf0cb_2c4d_43d1_9392_70e9820b6e71.slice - libcontainer container kubepods-besteffort-pod5ecdf0cb_2c4d_43d1_9392_70e9820b6e71.slice. 
Sep 4 16:38:40.379463 systemd[1]: Created slice kubepods-besteffort-pod144c2272_3463_4266_b16f_fe5c69de4bee.slice - libcontainer container kubepods-besteffort-pod144c2272_3463_4266_b16f_fe5c69de4bee.slice. Sep 4 16:38:40.416235 containerd[1609]: time="2025-09-04T16:38:40.416177491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" id:\"6db720eff784d8f9a9e7edf138680a1a0b5a975c04713002a15e8ff049f8f3f8\" pid:3974 exit_status:1 exited_at:{seconds:1757003920 nanos:415868564}" Sep 4 16:38:40.466789 kubelet[2751]: I0904 16:38:40.466736 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/144c2272-3463-4266-b16f-fe5c69de4bee-whisker-backend-key-pair\") pod \"whisker-7589d758df-fw28l\" (UID: \"144c2272-3463-4266-b16f-fe5c69de4bee\") " pod="calico-system/whisker-7589d758df-fw28l" Sep 4 16:38:40.466789 kubelet[2751]: I0904 16:38:40.466774 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/144c2272-3463-4266-b16f-fe5c69de4bee-whisker-ca-bundle\") pod \"whisker-7589d758df-fw28l\" (UID: \"144c2272-3463-4266-b16f-fe5c69de4bee\") " pod="calico-system/whisker-7589d758df-fw28l" Sep 4 16:38:40.467207 kubelet[2751]: I0904 16:38:40.466809 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qts\" (UniqueName: \"kubernetes.io/projected/144c2272-3463-4266-b16f-fe5c69de4bee-kube-api-access-47qts\") pod \"whisker-7589d758df-fw28l\" (UID: \"144c2272-3463-4266-b16f-fe5c69de4bee\") " pod="calico-system/whisker-7589d758df-fw28l" Sep 4 16:38:40.685367 containerd[1609]: time="2025-09-04T16:38:40.685321534Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7589d758df-fw28l,Uid:144c2272-3463-4266-b16f-fe5c69de4bee,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:40.945073 systemd-networkd[1496]: cali5321da7782d: Link UP Sep 4 16:38:40.945785 systemd-networkd[1496]: cali5321da7782d: Gained carrier Sep 4 16:38:40.958791 containerd[1609]: 2025-09-04 16:38:40.835 [INFO][4088] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:38:40.958791 containerd[1609]: 2025-09-04 16:38:40.852 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7589d758df--fw28l-eth0 whisker-7589d758df- calico-system 144c2272-3463-4266-b16f-fe5c69de4bee 963 0 2025-09-04 16:38:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7589d758df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7589d758df-fw28l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5321da7782d [] [] }} ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-" Sep 4 16:38:40.958791 containerd[1609]: 2025-09-04 16:38:40.852 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.958791 containerd[1609]: 2025-09-04 16:38:40.907 [INFO][4103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" HandleID="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" 
Workload="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.908 [INFO][4103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" HandleID="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Workload="localhost-k8s-whisker--7589d758df--fw28l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7589d758df-fw28l", "timestamp":"2025-09-04 16:38:40.907591116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.908 [INFO][4103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.908 [INFO][4103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.908 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.914 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" host="localhost" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.919 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.922 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.923 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.925 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:40.959039 containerd[1609]: 2025-09-04 16:38:40.925 [INFO][4103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" host="localhost" Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.926 [INFO][4103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9 Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.931 [INFO][4103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" host="localhost" Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.935 [INFO][4103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" host="localhost" Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.935 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" host="localhost" Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.935 [INFO][4103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:40.959254 containerd[1609]: 2025-09-04 16:38:40.935 [INFO][4103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" HandleID="k8s-pod-network.8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Workload="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.959378 containerd[1609]: 2025-09-04 16:38:40.938 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7589d758df--fw28l-eth0", GenerateName:"whisker-7589d758df-", Namespace:"calico-system", SelfLink:"", UID:"144c2272-3463-4266-b16f-fe5c69de4bee", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7589d758df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7589d758df-fw28l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5321da7782d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:40.959378 containerd[1609]: 2025-09-04 16:38:40.938 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.959447 containerd[1609]: 2025-09-04 16:38:40.938 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5321da7782d ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.959447 containerd[1609]: 2025-09-04 16:38:40.946 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:40.959492 containerd[1609]: 2025-09-04 16:38:40.946 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" 
WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7589d758df--fw28l-eth0", GenerateName:"whisker-7589d758df-", Namespace:"calico-system", SelfLink:"", UID:"144c2272-3463-4266-b16f-fe5c69de4bee", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7589d758df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9", Pod:"whisker-7589d758df-fw28l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5321da7782d", MAC:"92:2c:d4:74:2a:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:40.959541 containerd[1609]: 2025-09-04 16:38:40.954 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" Namespace="calico-system" Pod="whisker-7589d758df-fw28l" WorkloadEndpoint="localhost-k8s-whisker--7589d758df--fw28l-eth0" Sep 4 16:38:41.043995 containerd[1609]: time="2025-09-04T16:38:41.043936978Z" level=info msg="connecting to shim 
8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9" address="unix:///run/containerd/s/edf0f5ce8a5dfb29519122a3ea2aa95a9baa4a10573625ce828abbf3d09cd2f4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:41.071002 systemd[1]: Started cri-containerd-8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9.scope - libcontainer container 8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9. Sep 4 16:38:41.083156 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:41.110596 containerd[1609]: time="2025-09-04T16:38:41.110563227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7589d758df-fw28l,Uid:144c2272-3463-4266-b16f-fe5c69de4bee,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9\"" Sep 4 16:38:41.111956 containerd[1609]: time="2025-09-04T16:38:41.111920243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 16:38:41.215492 containerd[1609]: time="2025-09-04T16:38:41.215348709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7865959f87-8s5vl,Uid:f71802a3-8f51-4535-ba85-ea33c10efa51,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:41.215492 containerd[1609]: time="2025-09-04T16:38:41.215402394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-sh2d4,Uid:1cf2ef1c-ed9a-45fe-b40b-c7c747394568,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:38:41.216057 containerd[1609]: time="2025-09-04T16:38:41.216027036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rpcnd,Uid:736b22e8-5edf-4ff6-a411-ae55a48db23f,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:41.327631 systemd-networkd[1496]: cali30c2c343997: Link UP Sep 4 16:38:41.328326 systemd-networkd[1496]: cali30c2c343997: Gained carrier Sep 4 16:38:41.342626 containerd[1609]: 2025-09-04 
16:38:41.246 [INFO][4174] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:38:41.342626 containerd[1609]: 2025-09-04 16:38:41.259 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0 calico-apiserver-64b68b94f- calico-apiserver 1cf2ef1c-ed9a-45fe-b40b-c7c747394568 853 0 2025-09-04 16:38:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64b68b94f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64b68b94f-sh2d4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30c2c343997 [] [] }} ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-" Sep 4 16:38:41.342626 containerd[1609]: 2025-09-04 16:38:41.259 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.342626 containerd[1609]: 2025-09-04 16:38:41.294 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" HandleID="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Workload="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.294 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" HandleID="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Workload="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64b68b94f-sh2d4", "timestamp":"2025-09-04 16:38:41.294337343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.294 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.294 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.294 [INFO][4206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.300 [INFO][4206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" host="localhost" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.305 [INFO][4206] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.308 [INFO][4206] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.310 [INFO][4206] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.312 [INFO][4206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.342861 containerd[1609]: 2025-09-04 16:38:41.312 [INFO][4206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" host="localhost" Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.313 [INFO][4206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634 Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.316 [INFO][4206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" host="localhost" Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4206] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" host="localhost" Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" host="localhost" Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 16:38:41.343127 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" HandleID="k8s-pod-network.385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Workload="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.343242 containerd[1609]: 2025-09-04 16:38:41.325 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0", GenerateName:"calico-apiserver-64b68b94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cf2ef1c-ed9a-45fe-b40b-c7c747394568", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b68b94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64b68b94f-sh2d4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c2c343997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.343294 containerd[1609]: 2025-09-04 16:38:41.326 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.343294 containerd[1609]: 2025-09-04 16:38:41.326 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30c2c343997 ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.343294 containerd[1609]: 2025-09-04 16:38:41.328 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.343361 containerd[1609]: 2025-09-04 16:38:41.328 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0", GenerateName:"calico-apiserver-64b68b94f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"1cf2ef1c-ed9a-45fe-b40b-c7c747394568", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b68b94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634", Pod:"calico-apiserver-64b68b94f-sh2d4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30c2c343997", MAC:"42:33:5d:34:ad:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.343409 containerd[1609]: 2025-09-04 16:38:41.339 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-sh2d4" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--sh2d4-eth0" Sep 4 16:38:41.385517 containerd[1609]: time="2025-09-04T16:38:41.385460105Z" level=info msg="connecting to shim 385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634" address="unix:///run/containerd/s/278c97f18bd8d9ed7e46c0b4df0c6d06016f1d62646809e00c33a6049fe9c0b7" namespace=k8s.io protocol=ttrpc 
version=3 Sep 4 16:38:41.430036 systemd[1]: Started cri-containerd-385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634.scope - libcontainer container 385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634. Sep 4 16:38:41.435075 containerd[1609]: time="2025-09-04T16:38:41.435034761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" id:\"999b1c6051077d3fab13d3993d59c5ee43e24889ed8df7a6e2f4d2a3650b2ab3\" pid:4247 exit_status:1 exited_at:{seconds:1757003921 nanos:434426109}" Sep 4 16:38:41.441223 systemd-networkd[1496]: cali0979777c2fc: Link UP Sep 4 16:38:41.442138 systemd-networkd[1496]: cali0979777c2fc: Gained carrier Sep 4 16:38:41.451069 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:41.452909 containerd[1609]: 2025-09-04 16:38:41.257 [INFO][4184] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:38:41.452909 containerd[1609]: 2025-09-04 16:38:41.269 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--rpcnd-eth0 goldmane-7988f88666- calico-system 736b22e8-5edf-4ff6-a411-ae55a48db23f 854 0 2025-09-04 16:38:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-rpcnd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0979777c2fc [] [] }} ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-" Sep 4 16:38:41.452909 containerd[1609]: 2025-09-04 16:38:41.269 [INFO][4184] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.452909 containerd[1609]: 2025-09-04 16:38:41.297 [INFO][4218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" HandleID="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Workload="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.297 [INFO][4218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" HandleID="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Workload="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-rpcnd", "timestamp":"2025-09-04 16:38:41.297109929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.297 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.322 [INFO][4218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.402 [INFO][4218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" host="localhost" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.407 [INFO][4218] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.411 [INFO][4218] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.413 [INFO][4218] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.417 [INFO][4218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.453076 containerd[1609]: 2025-09-04 16:38:41.417 [INFO][4218] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" host="localhost" Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.419 [INFO][4218] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.423 [INFO][4218] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" host="localhost" Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.428 [INFO][4218] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" host="localhost" Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.428 [INFO][4218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" host="localhost" Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.428 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:41.453303 containerd[1609]: 2025-09-04 16:38:41.428 [INFO][4218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" HandleID="k8s-pod-network.ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Workload="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.453438 containerd[1609]: 2025-09-04 16:38:41.435 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rpcnd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"736b22e8-5edf-4ff6-a411-ae55a48db23f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-rpcnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0979777c2fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.453438 containerd[1609]: 2025-09-04 16:38:41.435 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.453510 containerd[1609]: 2025-09-04 16:38:41.435 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0979777c2fc ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.453510 containerd[1609]: 2025-09-04 16:38:41.441 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.453557 containerd[1609]: 2025-09-04 16:38:41.442 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rpcnd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"736b22e8-5edf-4ff6-a411-ae55a48db23f", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e", Pod:"goldmane-7988f88666-rpcnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0979777c2fc", MAC:"7a:d2:51:b5:04:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.453608 containerd[1609]: 2025-09-04 16:38:41.449 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" Namespace="calico-system" Pod="goldmane-7988f88666-rpcnd" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rpcnd-eth0" Sep 4 16:38:41.485112 containerd[1609]: time="2025-09-04T16:38:41.484962407Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-64b68b94f-sh2d4,Uid:1cf2ef1c-ed9a-45fe-b40b-c7c747394568,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634\"" Sep 4 16:38:41.491464 containerd[1609]: time="2025-09-04T16:38:41.491443982Z" level=info msg="connecting to shim ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e" address="unix:///run/containerd/s/ed51fcf0fb20f605f3608b3ce10f1fffb605a5c5c574e01f9ae5a0f96d882d94" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:41.521024 systemd[1]: Started cri-containerd-ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e.scope - libcontainer container ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e. Sep 4 16:38:41.534429 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:41.536096 systemd-networkd[1496]: calice5f967eb75: Link UP Sep 4 16:38:41.536714 systemd-networkd[1496]: calice5f967eb75: Gained carrier Sep 4 16:38:41.548244 containerd[1609]: 2025-09-04 16:38:41.247 [INFO][4163] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:38:41.548244 containerd[1609]: 2025-09-04 16:38:41.260 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0 calico-kube-controllers-7865959f87- calico-system f71802a3-8f51-4535-ba85-ea33c10efa51 855 0 2025-09-04 16:38:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7865959f87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7865959f87-8s5vl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
calice5f967eb75 [] [] }} ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-" Sep 4 16:38:41.548244 containerd[1609]: 2025-09-04 16:38:41.261 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548244 containerd[1609]: 2025-09-04 16:38:41.298 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" HandleID="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Workload="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.298 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" HandleID="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Workload="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7865959f87-8s5vl", "timestamp":"2025-09-04 16:38:41.298284839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.298 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.428 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.429 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.503 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" host="localhost" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.509 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.512 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.514 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.515 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:41.548407 containerd[1609]: 2025-09-04 16:38:41.515 [INFO][4208] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" host="localhost" Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.516 [INFO][4208] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.521 [INFO][4208] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" host="localhost" Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.530 [INFO][4208] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" host="localhost" Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.530 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" host="localhost" Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.530 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:41.548632 containerd[1609]: 2025-09-04 16:38:41.530 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" HandleID="k8s-pod-network.165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Workload="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548750 containerd[1609]: 2025-09-04 16:38:41.533 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0", GenerateName:"calico-kube-controllers-7865959f87-", Namespace:"calico-system", SelfLink:"", UID:"f71802a3-8f51-4535-ba85-ea33c10efa51", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"7865959f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7865959f87-8s5vl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice5f967eb75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.548812 containerd[1609]: 2025-09-04 16:38:41.533 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548812 containerd[1609]: 2025-09-04 16:38:41.533 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice5f967eb75 ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548812 containerd[1609]: 2025-09-04 16:38:41.535 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.548903 containerd[1609]: 2025-09-04 16:38:41.536 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0", GenerateName:"calico-kube-controllers-7865959f87-", Namespace:"calico-system", SelfLink:"", UID:"f71802a3-8f51-4535-ba85-ea33c10efa51", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7865959f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c", Pod:"calico-kube-controllers-7865959f87-8s5vl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice5f967eb75", MAC:"a6:0b:78:93:b9:1e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:41.548964 containerd[1609]: 2025-09-04 16:38:41.545 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" Namespace="calico-system" Pod="calico-kube-controllers-7865959f87-8s5vl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7865959f87--8s5vl-eth0" Sep 4 16:38:41.567534 containerd[1609]: time="2025-09-04T16:38:41.567488885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rpcnd,Uid:736b22e8-5edf-4ff6-a411-ae55a48db23f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e\"" Sep 4 16:38:41.571284 containerd[1609]: time="2025-09-04T16:38:41.571244836Z" level=info msg="connecting to shim 165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c" address="unix:///run/containerd/s/7951bb183cedf703c0c7f430203de073d4fcde88d8d7cbafc965323df5bf1b90" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:41.598024 systemd[1]: Started cri-containerd-165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c.scope - libcontainer container 165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c. 
Sep 4 16:38:41.609354 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:41.645371 containerd[1609]: time="2025-09-04T16:38:41.645322448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7865959f87-8s5vl,Uid:f71802a3-8f51-4535-ba85-ea33c10efa51,Namespace:calico-system,Attempt:0,} returns sandbox id \"165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c\"" Sep 4 16:38:41.813143 kubelet[2751]: I0904 16:38:41.813033 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:38:41.813533 kubelet[2751]: E0904 16:38:41.813386 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:42.214876 containerd[1609]: time="2025-09-04T16:38:42.214811777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vr9,Uid:f37925be-17ad-4b82-a217-d98beb0e0897,Namespace:calico-system,Attempt:0,}" Sep 4 16:38:42.217427 kubelet[2751]: I0904 16:38:42.217376 2751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecdf0cb-2c4d-43d1-9392-70e9820b6e71" path="/var/lib/kubelet/pods/5ecdf0cb-2c4d-43d1-9392-70e9820b6e71/volumes" Sep 4 16:38:42.302652 systemd-networkd[1496]: cali5fcf35e6f46: Link UP Sep 4 16:38:42.304161 systemd-networkd[1496]: cali5fcf35e6f46: Gained carrier Sep 4 16:38:42.319472 containerd[1609]: 2025-09-04 16:38:42.238 [INFO][4437] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:38:42.319472 containerd[1609]: 2025-09-04 16:38:42.247 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x4vr9-eth0 csi-node-driver- calico-system f37925be-17ad-4b82-a217-d98beb0e0897 735 0 2025-09-04 16:38:18 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x4vr9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5fcf35e6f46 [] [] }} ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-" Sep 4 16:38:42.319472 containerd[1609]: 2025-09-04 16:38:42.247 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.319472 containerd[1609]: 2025-09-04 16:38:42.270 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" HandleID="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Workload="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.270 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" HandleID="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Workload="localhost-k8s-csi--node--driver--x4vr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x4vr9", "timestamp":"2025-09-04 16:38:42.270250946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.270 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.270 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.270 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.275 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" host="localhost" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.279 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.283 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.284 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.286 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:42.319699 containerd[1609]: 2025-09-04 16:38:42.286 [INFO][4452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" host="localhost" Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.287 [INFO][4452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.291 
[INFO][4452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" host="localhost" Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.296 [INFO][4452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" host="localhost" Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.296 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" host="localhost" Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.296 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:42.319939 containerd[1609]: 2025-09-04 16:38:42.296 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" HandleID="k8s-pod-network.c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Workload="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.320062 containerd[1609]: 2025-09-04 16:38:42.299 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f37925be-17ad-4b82-a217-d98beb0e0897", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 18, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x4vr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fcf35e6f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:42.320116 containerd[1609]: 2025-09-04 16:38:42.299 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.320116 containerd[1609]: 2025-09-04 16:38:42.299 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fcf35e6f46 ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.320116 containerd[1609]: 2025-09-04 16:38:42.304 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" 
Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.320185 containerd[1609]: 2025-09-04 16:38:42.305 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f37925be-17ad-4b82-a217-d98beb0e0897", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b", Pod:"csi-node-driver-x4vr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fcf35e6f46", MAC:"e6:7c:b5:50:7b:79", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:42.320233 containerd[1609]: 2025-09-04 16:38:42.315 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" Namespace="calico-system" Pod="csi-node-driver-x4vr9" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vr9-eth0" Sep 4 16:38:42.341673 kubelet[2751]: E0904 16:38:42.341631 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:42.345046 containerd[1609]: time="2025-09-04T16:38:42.345013167Z" level=info msg="connecting to shim c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b" address="unix:///run/containerd/s/cf44dc6eff52be921eab665cda09b4da6305194621664309903c5afee0d90dc1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:42.373005 systemd[1]: Started cri-containerd-c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b.scope - libcontainer container c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b. 
Sep 4 16:38:42.384584 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:42.397464 containerd[1609]: time="2025-09-04T16:38:42.397414729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vr9,Uid:f37925be-17ad-4b82-a217-d98beb0e0897,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b\"" Sep 4 16:38:42.721035 systemd-networkd[1496]: cali5321da7782d: Gained IPv6LL Sep 4 16:38:42.977091 systemd-networkd[1496]: cali30c2c343997: Gained IPv6LL Sep 4 16:38:43.068408 systemd-networkd[1496]: vxlan.calico: Link UP Sep 4 16:38:43.068418 systemd-networkd[1496]: vxlan.calico: Gained carrier Sep 4 16:38:43.119041 containerd[1609]: time="2025-09-04T16:38:43.118990497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:43.119774 containerd[1609]: time="2025-09-04T16:38:43.119739319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 16:38:43.122624 containerd[1609]: time="2025-09-04T16:38:43.122593843Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:43.126397 containerd[1609]: time="2025-09-04T16:38:43.126365226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:38:43.126763 containerd[1609]: time="2025-09-04T16:38:43.126732884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.014777874s" Sep 4 16:38:43.126794 containerd[1609]: time="2025-09-04T16:38:43.126766510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 16:38:43.129193 containerd[1609]: time="2025-09-04T16:38:43.128635459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 16:38:43.133456 containerd[1609]: time="2025-09-04T16:38:43.133431893Z" level=info msg="CreateContainer within sandbox \"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 16:38:43.146718 containerd[1609]: time="2025-09-04T16:38:43.146334195Z" level=info msg="Container 6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:43.154557 containerd[1609]: time="2025-09-04T16:38:43.154516671Z" level=info msg="CreateContainer within sandbox \"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0\"" Sep 4 16:38:43.155138 containerd[1609]: time="2025-09-04T16:38:43.155114688Z" level=info msg="StartContainer for \"6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0\"" Sep 4 16:38:43.155983 containerd[1609]: time="2025-09-04T16:38:43.155965039Z" level=info msg="connecting to shim 6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0" address="unix:///run/containerd/s/edf0f5ce8a5dfb29519122a3ea2aa95a9baa4a10573625ce828abbf3d09cd2f4" protocol=ttrpc version=3 Sep 4 16:38:43.184010 systemd[1]: Started 
cri-containerd-6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0.scope - libcontainer container 6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0. Sep 4 16:38:43.215257 kubelet[2751]: E0904 16:38:43.215220 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:43.216144 containerd[1609]: time="2025-09-04T16:38:43.215730820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tt6nk,Uid:a4efebbf-9f8c-4a90-84b0-5c91397dcaf7,Namespace:kube-system,Attempt:0,}" Sep 4 16:38:43.216144 containerd[1609]: time="2025-09-04T16:38:43.216013301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-jk85x,Uid:74549879-72cf-44a7-aeac-80e0d4bf8fab,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:38:43.233073 systemd-networkd[1496]: cali0979777c2fc: Gained IPv6LL Sep 4 16:38:43.236332 containerd[1609]: time="2025-09-04T16:38:43.236293388Z" level=info msg="StartContainer for \"6fac90599e89fc170ebcc003a824ef7a5dce678915232148b8e7b14ba373d1c0\" returns successfully" Sep 4 16:38:43.348822 systemd-networkd[1496]: calib438ee010d1: Link UP Sep 4 16:38:43.350081 systemd-networkd[1496]: calib438ee010d1: Gained carrier Sep 4 16:38:43.361201 systemd-networkd[1496]: calice5f967eb75: Gained IPv6LL Sep 4 16:38:43.361984 containerd[1609]: 2025-09-04 16:38:43.268 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0 coredns-7c65d6cfc9- kube-system a4efebbf-9f8c-4a90-84b0-5c91397dcaf7 847 0 2025-09-04 16:38:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tt6nk eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] calib438ee010d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-" Sep 4 16:38:43.361984 containerd[1609]: 2025-09-04 16:38:43.268 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.361984 containerd[1609]: 2025-09-04 16:38:43.305 [INFO][4678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" HandleID="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Workload="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.305 [INFO][4678] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" HandleID="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Workload="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tt6nk", "timestamp":"2025-09-04 16:38:43.305386291 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.305 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.305 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.305 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.312 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" host="localhost" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.315 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.318 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.319 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.321 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:43.362108 containerd[1609]: 2025-09-04 16:38:43.321 [INFO][4678] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" host="localhost" Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.322 [INFO][4678] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95 Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.335 [INFO][4678] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" host="localhost" Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.340 [INFO][4678] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" host="localhost" Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.340 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" host="localhost" Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.341 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:43.362307 containerd[1609]: 2025-09-04 16:38:43.341 [INFO][4678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" HandleID="k8s-pod-network.0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Workload="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.362427 containerd[1609]: 2025-09-04 16:38:43.345 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4efebbf-9f8c-4a90-84b0-5c91397dcaf7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tt6nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib438ee010d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:43.362497 containerd[1609]: 2025-09-04 16:38:43.345 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.362497 containerd[1609]: 2025-09-04 16:38:43.345 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib438ee010d1 ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.362497 containerd[1609]: 2025-09-04 16:38:43.350 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.362563 containerd[1609]: 2025-09-04 16:38:43.351 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4efebbf-9f8c-4a90-84b0-5c91397dcaf7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95", Pod:"coredns-7c65d6cfc9-tt6nk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib438ee010d1", MAC:"96:75:32:c3:9a:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:43.362563 containerd[1609]: 2025-09-04 16:38:43.358 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tt6nk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tt6nk-eth0" Sep 4 16:38:43.395822 containerd[1609]: time="2025-09-04T16:38:43.395748471Z" level=info msg="connecting to shim 0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95" address="unix:///run/containerd/s/36ceab7eb52496216a22fee3d96e2ecc19358cf3134c939bb20675bbe4059eaa" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:43.420009 systemd[1]: Started cri-containerd-0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95.scope - libcontainer container 0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95. 
Sep 4 16:38:43.435621 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:43.447360 systemd-networkd[1496]: cali0924873a630: Link UP Sep 4 16:38:43.447673 systemd-networkd[1496]: cali0924873a630: Gained carrier Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.278 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0 calico-apiserver-64b68b94f- calico-apiserver 74549879-72cf-44a7-aeac-80e0d4bf8fab 857 0 2025-09-04 16:38:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64b68b94f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64b68b94f-jk85x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0924873a630 [] [] }} ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.278 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.310 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" HandleID="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" 
Workload="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.311 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" HandleID="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Workload="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64b68b94f-jk85x", "timestamp":"2025-09-04 16:38:43.3109177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.311 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.341 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.341 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.413 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.418 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.422 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.424 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.426 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.426 [INFO][4685] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.427 [INFO][4685] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941 Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.432 [INFO][4685] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.440 [INFO][4685] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.440 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" host="localhost" Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.440 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:43.465130 containerd[1609]: 2025-09-04 16:38:43.440 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" HandleID="k8s-pod-network.82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Workload="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.444 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0", GenerateName:"calico-apiserver-64b68b94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"74549879-72cf-44a7-aeac-80e0d4bf8fab", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b68b94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64b68b94f-jk85x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0924873a630", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.444 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.444 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0924873a630 ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.446 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.446 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0", GenerateName:"calico-apiserver-64b68b94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"74549879-72cf-44a7-aeac-80e0d4bf8fab", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b68b94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941", Pod:"calico-apiserver-64b68b94f-jk85x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0924873a630", MAC:"46:20:df:82:c2:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:43.465631 containerd[1609]: 2025-09-04 16:38:43.461 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" Namespace="calico-apiserver" Pod="calico-apiserver-64b68b94f-jk85x" WorkloadEndpoint="localhost-k8s-calico--apiserver--64b68b94f--jk85x-eth0" Sep 4 16:38:43.475316 containerd[1609]: time="2025-09-04T16:38:43.475271346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tt6nk,Uid:a4efebbf-9f8c-4a90-84b0-5c91397dcaf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95\"" Sep 4 16:38:43.476002 kubelet[2751]: E0904 16:38:43.475973 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:43.478370 containerd[1609]: time="2025-09-04T16:38:43.478332504Z" level=info msg="CreateContainer within sandbox \"0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 16:38:43.492434 containerd[1609]: time="2025-09-04T16:38:43.492349381Z" level=info msg="Container 86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:38:43.495367 containerd[1609]: time="2025-09-04T16:38:43.495323939Z" level=info msg="connecting to shim 82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941" address="unix:///run/containerd/s/dbf08c6d729f5ebb1d299417f115111de64a664f99ab9c6299d3eb2e93246c30" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:38:43.500563 containerd[1609]: time="2025-09-04T16:38:43.500467802Z" level=info msg="CreateContainer within sandbox \"0babcd94aefc6873270bc84392a9d5eda8e83b7cfaa5dbbaf4ea0c277eb1ec95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8\"" Sep 4 16:38:43.501064 containerd[1609]: time="2025-09-04T16:38:43.501027424Z" level=info 
msg="StartContainer for \"86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8\"" Sep 4 16:38:43.502113 containerd[1609]: time="2025-09-04T16:38:43.502083176Z" level=info msg="connecting to shim 86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8" address="unix:///run/containerd/s/36ceab7eb52496216a22fee3d96e2ecc19358cf3134c939bb20675bbe4059eaa" protocol=ttrpc version=3 Sep 4 16:38:43.525391 systemd[1]: Started cri-containerd-82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941.scope - libcontainer container 82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941. Sep 4 16:38:43.527034 systemd[1]: Started cri-containerd-86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8.scope - libcontainer container 86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8. Sep 4 16:38:43.540807 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:38:43.563083 containerd[1609]: time="2025-09-04T16:38:43.562527372Z" level=info msg="StartContainer for \"86e1c80a9168cd61eb6a12db73bb7e6f936652c83929447d5fd438717e155bc8\" returns successfully" Sep 4 16:38:43.580157 containerd[1609]: time="2025-09-04T16:38:43.580064582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b68b94f-jk85x,Uid:74549879-72cf-44a7-aeac-80e0d4bf8fab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941\"" Sep 4 16:38:43.873057 systemd-networkd[1496]: cali5fcf35e6f46: Gained IPv6LL Sep 4 16:38:44.215046 kubelet[2751]: E0904 16:38:44.214951 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:38:44.215605 containerd[1609]: time="2025-09-04T16:38:44.215555317Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9q9l,Uid:184689f5-f11a-4c95-81d1-690a0392b7bc,Namespace:kube-system,Attempt:0,}" Sep 4 16:38:44.257068 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Sep 4 16:38:44.302392 systemd-networkd[1496]: cali70f792ee2a6: Link UP Sep 4 16:38:44.303167 systemd-networkd[1496]: cali70f792ee2a6: Gained carrier Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.245 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0 coredns-7c65d6cfc9- kube-system 184689f5-f11a-4c95-81d1-690a0392b7bc 856 0 2025-09-04 16:38:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-j9q9l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70f792ee2a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.245 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.268 [INFO][4889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" HandleID="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Workload="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318148 containerd[1609]: 
2025-09-04 16:38:44.268 [INFO][4889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" HandleID="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Workload="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-j9q9l", "timestamp":"2025-09-04 16:38:44.268813699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.269 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.269 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.269 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.276 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.279 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.283 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.284 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.286 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.286 [INFO][4889] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.287 [INFO][4889] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7 Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.291 [INFO][4889] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.296 [INFO][4889] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.296 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" host="localhost" Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.296 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:38:44.318148 containerd[1609]: 2025-09-04 16:38:44.296 [INFO][4889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" HandleID="k8s-pod-network.6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Workload="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.299 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"184689f5-f11a-4c95-81d1-690a0392b7bc", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-j9q9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70f792ee2a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.300 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.300 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f792ee2a6 ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.303 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.303 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"184689f5-f11a-4c95-81d1-690a0392b7bc", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7", Pod:"coredns-7c65d6cfc9-j9q9l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70f792ee2a6", MAC:"32:44:93:63:9b:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 16:38:44.318672 containerd[1609]: 2025-09-04 16:38:44.313 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j9q9l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--j9q9l-eth0"
Sep 4 16:38:44.325962 systemd[1]: Started sshd@8-10.0.0.3:22-10.0.0.1:38192.service - OpenSSH per-connection server daemon (10.0.0.1:38192).
Sep 4 16:38:44.345902 containerd[1609]: time="2025-09-04T16:38:44.345848905Z" level=info msg="connecting to shim 6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7" address="unix:///run/containerd/s/e8ec397b7cdaa21c115c6157f6cdad58b6bdc4aa0cf90601d300e0dffd1ed685" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:38:44.358636 kubelet[2751]: E0904 16:38:44.358602 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:44.374230 kubelet[2751]: I0904 16:38:44.374181 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tt6nk" podStartSLOduration=37.374163718 podStartE2EDuration="37.374163718s" podCreationTimestamp="2025-09-04 16:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:44.373878432 +0000 UTC m=+44.234819552" watchObservedRunningTime="2025-09-04 16:38:44.374163718 +0000 UTC m=+44.235104838"
Sep 4 16:38:44.380233 systemd[1]: Started cri-containerd-6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7.scope - libcontainer container 6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7.
Sep 4 16:38:44.398681 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 38192 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:38:44.400339 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:38:44.406999 systemd-logind[1591]: New session 9 of user core.
Sep 4 16:38:44.414140 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 16:38:44.418619 systemd-resolved[1268]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 16:38:44.458959 containerd[1609]: time="2025-09-04T16:38:44.458881197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9q9l,Uid:184689f5-f11a-4c95-81d1-690a0392b7bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7\""
Sep 4 16:38:44.459704 kubelet[2751]: E0904 16:38:44.459666 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:44.463268 containerd[1609]: time="2025-09-04T16:38:44.461809949Z" level=info msg="CreateContainer within sandbox \"6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 16:38:44.481864 containerd[1609]: time="2025-09-04T16:38:44.481800608Z" level=info msg="Container e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:44.490161 containerd[1609]: time="2025-09-04T16:38:44.490084143Z" level=info msg="CreateContainer within sandbox \"6cb556e6d048e40e6e09f3b022bc97b8bc8c398b532e8161ebfbdce1ea6643e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c\""
Sep 4 16:38:44.490761 containerd[1609]: time="2025-09-04T16:38:44.490734081Z" level=info msg="StartContainer for \"e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c\""
Sep 4 16:38:44.491920 containerd[1609]: time="2025-09-04T16:38:44.491595972Z" level=info msg="connecting to shim e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c" address="unix:///run/containerd/s/e8ec397b7cdaa21c115c6157f6cdad58b6bdc4aa0cf90601d300e0dffd1ed685" protocol=ttrpc version=3
Sep 4 16:38:44.513228 systemd[1]: Started cri-containerd-e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c.scope - libcontainer container e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c.
Sep 4 16:38:44.548181 containerd[1609]: time="2025-09-04T16:38:44.548149210Z" level=info msg="StartContainer for \"e952543703905039459877eb9952379d1191cca5dcbd9086753e171918328e3c\" returns successfully"
Sep 4 16:38:44.573376 sshd[4958]: Connection closed by 10.0.0.1 port 38192
Sep 4 16:38:44.573705 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
Sep 4 16:38:44.578557 systemd[1]: sshd@8-10.0.0.3:22-10.0.0.1:38192.service: Deactivated successfully.
Sep 4 16:38:44.582535 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 16:38:44.585083 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit.
Sep 4 16:38:44.586776 systemd-logind[1591]: Removed session 9.
Sep 4 16:38:44.641077 systemd-networkd[1496]: cali0924873a630: Gained IPv6LL
Sep 4 16:38:45.156014 systemd-networkd[1496]: calib438ee010d1: Gained IPv6LL
Sep 4 16:38:45.368777 kubelet[2751]: E0904 16:38:45.368747 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:45.369301 kubelet[2751]: E0904 16:38:45.368801 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:45.377461 kubelet[2751]: I0904 16:38:45.377169 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j9q9l" podStartSLOduration=38.377151875 podStartE2EDuration="38.377151875s" podCreationTimestamp="2025-09-04 16:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:38:45.37680023 +0000 UTC m=+45.237741350" watchObservedRunningTime="2025-09-04 16:38:45.377151875 +0000 UTC m=+45.238092985"
Sep 4 16:38:45.665155 systemd-networkd[1496]: cali70f792ee2a6: Gained IPv6LL
Sep 4 16:38:46.327864 containerd[1609]: time="2025-09-04T16:38:46.327820373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:46.328615 containerd[1609]: time="2025-09-04T16:38:46.328574451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 4 16:38:46.329732 containerd[1609]: time="2025-09-04T16:38:46.329698358Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:46.331715 containerd[1609]: time="2025-09-04T16:38:46.331679354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:46.332211 containerd[1609]: time="2025-09-04T16:38:46.332183987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.203511486s"
Sep 4 16:38:46.332267 containerd[1609]: time="2025-09-04T16:38:46.332215057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 16:38:46.333105 containerd[1609]: time="2025-09-04T16:38:46.333074540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 4 16:38:46.334931 containerd[1609]: time="2025-09-04T16:38:46.334909241Z" level=info msg="CreateContainer within sandbox \"385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 16:38:46.342639 containerd[1609]: time="2025-09-04T16:38:46.342609640Z" level=info msg="Container 74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:46.351133 containerd[1609]: time="2025-09-04T16:38:46.351094887Z" level=info msg="CreateContainer within sandbox \"385230410317e46007cd35eac94b7e3445eb9946a2c092746eb6fbe309f80634\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f\""
Sep 4 16:38:46.351549 containerd[1609]: time="2025-09-04T16:38:46.351518561Z" level=info msg="StartContainer for \"74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f\""
Sep 4 16:38:46.352580 containerd[1609]: time="2025-09-04T16:38:46.352546061Z" level=info msg="connecting to shim 74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f" address="unix:///run/containerd/s/278c97f18bd8d9ed7e46c0b4df0c6d06016f1d62646809e00c33a6049fe9c0b7" protocol=ttrpc version=3
Sep 4 16:38:46.373127 systemd[1]: Started cri-containerd-74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f.scope - libcontainer container 74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f.
Sep 4 16:38:46.377817 kubelet[2751]: E0904 16:38:46.377773 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:46.378609 kubelet[2751]: E0904 16:38:46.377960 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:46.464346 containerd[1609]: time="2025-09-04T16:38:46.464292184Z" level=info msg="StartContainer for \"74a208a0431464d6b5233b3fbc6e845b7c30bcbe2308f7da7ff1e55fa279803f\" returns successfully"
Sep 4 16:38:47.381676 kubelet[2751]: E0904 16:38:47.381640 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:38:47.390637 kubelet[2751]: I0904 16:38:47.390587 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64b68b94f-sh2d4" podStartSLOduration=27.543798505 podStartE2EDuration="32.390554515s" podCreationTimestamp="2025-09-04 16:38:15 +0000 UTC" firstStartedPulling="2025-09-04 16:38:41.48614885 +0000 UTC m=+41.347089970" lastFinishedPulling="2025-09-04 16:38:46.33290486 +0000 UTC m=+46.193845980" observedRunningTime="2025-09-04 16:38:47.389895584 +0000 UTC m=+47.250836704" watchObservedRunningTime="2025-09-04 16:38:47.390554515 +0000 UTC m=+47.251495635"
Sep 4 16:38:48.382475 kubelet[2751]: I0904 16:38:48.382440 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:38:49.187266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229422307.mount: Deactivated successfully.
Sep 4 16:38:49.587558 systemd[1]: Started sshd@9-10.0.0.3:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198).
Sep 4 16:38:49.799662 containerd[1609]: time="2025-09-04T16:38:49.799013109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:49.800055 containerd[1609]: time="2025-09-04T16:38:49.799687298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 4 16:38:49.801695 containerd[1609]: time="2025-09-04T16:38:49.801648656Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:49.804673 containerd[1609]: time="2025-09-04T16:38:49.804505192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:49.805385 containerd[1609]: time="2025-09-04T16:38:49.805344922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.472233329s"
Sep 4 16:38:49.805505 containerd[1609]: time="2025-09-04T16:38:49.805394037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 4 16:38:49.806737 containerd[1609]: time="2025-09-04T16:38:49.806492781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 4 16:38:49.808667 containerd[1609]: time="2025-09-04T16:38:49.808646081Z" level=info msg="CreateContainer within sandbox \"ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 4 16:38:49.832928 containerd[1609]: time="2025-09-04T16:38:49.832034522Z" level=info msg="Container 2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:49.889351 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:38:49.891070 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:38:49.894860 containerd[1609]: time="2025-09-04T16:38:49.894831084Z" level=info msg="CreateContainer within sandbox \"ace51777c8af867fe7307a4e2fe53ab22a05b85c80c05fd643aa66c94fb5c27e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\""
Sep 4 16:38:49.895438 containerd[1609]: time="2025-09-04T16:38:49.895403456Z" level=info msg="StartContainer for \"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\""
Sep 4 16:38:49.896393 systemd-logind[1591]: New session 10 of user core.
Sep 4 16:38:49.896672 containerd[1609]: time="2025-09-04T16:38:49.896494032Z" level=info msg="connecting to shim 2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22" address="unix:///run/containerd/s/ed51fcf0fb20f605f3608b3ce10f1fffb605a5c5c574e01f9ae5a0f96d882d94" protocol=ttrpc version=3
Sep 4 16:38:49.906059 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 16:38:49.945019 systemd[1]: Started cri-containerd-2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22.scope - libcontainer container 2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22.
Sep 4 16:38:49.996878 containerd[1609]: time="2025-09-04T16:38:49.996743932Z" level=info msg="StartContainer for \"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\" returns successfully"
Sep 4 16:38:50.045491 sshd[5095]: Connection closed by 10.0.0.1 port 38198
Sep 4 16:38:50.045824 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Sep 4 16:38:50.056564 systemd[1]: sshd@9-10.0.0.3:22-10.0.0.1:38198.service: Deactivated successfully.
Sep 4 16:38:50.058432 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 16:38:50.059316 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit.
Sep 4 16:38:50.062480 systemd[1]: Started sshd@10-10.0.0.3:22-10.0.0.1:55640.service - OpenSSH per-connection server daemon (10.0.0.1:55640).
Sep 4 16:38:50.063170 systemd-logind[1591]: Removed session 10.
Sep 4 16:38:50.116460 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 55640 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:38:50.118081 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:38:50.123553 systemd-logind[1591]: New session 11 of user core.
Sep 4 16:38:50.125824 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 16:38:50.185901 containerd[1609]: time="2025-09-04T16:38:50.185763978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" id:\"dc29bc34e7fdedb5f366bdf40fc438dd728b04099733812cd44316f8eeac22a2\" pid:5155 exited_at:{seconds:1757003930 nanos:185439287}"
Sep 4 16:38:50.274939 sshd[5161]: Connection closed by 10.0.0.1 port 55640
Sep 4 16:38:50.274660 sshd-session[5140]: pam_unix(sshd:session): session closed for user core
Sep 4 16:38:50.286364 systemd[1]: sshd@10-10.0.0.3:22-10.0.0.1:55640.service: Deactivated successfully.
Sep 4 16:38:50.288630 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 16:38:50.289830 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit.
Sep 4 16:38:50.293942 systemd[1]: Started sshd@11-10.0.0.3:22-10.0.0.1:55654.service - OpenSSH per-connection server daemon (10.0.0.1:55654).
Sep 4 16:38:50.295619 systemd-logind[1591]: Removed session 11.
Sep 4 16:38:50.345560 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 55654 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:38:50.347387 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:38:50.355028 systemd-logind[1591]: New session 12 of user core.
Sep 4 16:38:50.358113 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 16:38:50.404362 kubelet[2751]: I0904 16:38:50.404233 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-rpcnd" podStartSLOduration=25.166297932 podStartE2EDuration="33.404216968s" podCreationTimestamp="2025-09-04 16:38:17 +0000 UTC" firstStartedPulling="2025-09-04 16:38:41.56838032 +0000 UTC m=+41.429321430" lastFinishedPulling="2025-09-04 16:38:49.806299346 +0000 UTC m=+49.667240466" observedRunningTime="2025-09-04 16:38:50.400095683 +0000 UTC m=+50.261036803" watchObservedRunningTime="2025-09-04 16:38:50.404216968 +0000 UTC m=+50.265158088"
Sep 4 16:38:50.485800 sshd[5187]: Connection closed by 10.0.0.1 port 55654
Sep 4 16:38:50.487036 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Sep 4 16:38:50.487746 containerd[1609]: time="2025-09-04T16:38:50.487702219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\" id:\"bde88a5af115d4d4225b81db686fa82d29f42c2f82b96cdcc1a529a5406dd1c6\" pid:5204 exit_status:1 exited_at:{seconds:1757003930 nanos:487269900}"
Sep 4 16:38:50.494001 systemd[1]: sshd@11-10.0.0.3:22-10.0.0.1:55654.service: Deactivated successfully.
Sep 4 16:38:50.496390 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 16:38:50.497174 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit.
Sep 4 16:38:50.498428 systemd-logind[1591]: Removed session 12.
Sep 4 16:38:51.472646 containerd[1609]: time="2025-09-04T16:38:51.472600545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\" id:\"0a7b42960f98fd7eae0aa427d37979eb48c51c6cb639ef74710b918e974d9229\" pid:5239 exit_status:1 exited_at:{seconds:1757003931 nanos:472298790}"
Sep 4 16:38:53.461256 containerd[1609]: time="2025-09-04T16:38:53.461212535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:53.462568 containerd[1609]: time="2025-09-04T16:38:53.462531850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 4 16:38:53.464183 containerd[1609]: time="2025-09-04T16:38:53.464147007Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:53.480828 containerd[1609]: time="2025-09-04T16:38:53.480769483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:53.481473 containerd[1609]: time="2025-09-04T16:38:53.481439420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.674916772s"
Sep 4 16:38:53.481513 containerd[1609]: time="2025-09-04T16:38:53.481479929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 4 16:38:53.482469 containerd[1609]: time="2025-09-04T16:38:53.482443996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 4 16:38:53.504144 containerd[1609]: time="2025-09-04T16:38:53.504108744Z" level=info msg="CreateContainer within sandbox \"165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 4 16:38:53.517137 containerd[1609]: time="2025-09-04T16:38:53.517097114Z" level=info msg="Container d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:53.517923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238400660.mount: Deactivated successfully.
Sep 4 16:38:53.525280 containerd[1609]: time="2025-09-04T16:38:53.525243277Z" level=info msg="CreateContainer within sandbox \"165034a68ea5ceb463ee6d3985c2047b5393ff3d4783018281c46791052c7a7c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\""
Sep 4 16:38:53.525597 containerd[1609]: time="2025-09-04T16:38:53.525569679Z" level=info msg="StartContainer for \"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\""
Sep 4 16:38:53.526491 containerd[1609]: time="2025-09-04T16:38:53.526449663Z" level=info msg="connecting to shim d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42" address="unix:///run/containerd/s/7951bb183cedf703c0c7f430203de073d4fcde88d8d7cbafc965323df5bf1b90" protocol=ttrpc version=3
Sep 4 16:38:53.559007 systemd[1]: Started cri-containerd-d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42.scope - libcontainer container d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42.
Sep 4 16:38:53.607618 containerd[1609]: time="2025-09-04T16:38:53.607581318Z" level=info msg="StartContainer for \"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\" returns successfully"
Sep 4 16:38:54.424997 kubelet[2751]: I0904 16:38:54.424932 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7865959f87-8s5vl" podStartSLOduration=24.589636706 podStartE2EDuration="36.424913437s" podCreationTimestamp="2025-09-04 16:38:18 +0000 UTC" firstStartedPulling="2025-09-04 16:38:41.646960253 +0000 UTC m=+41.507901373" lastFinishedPulling="2025-09-04 16:38:53.482236984 +0000 UTC m=+53.343178104" observedRunningTime="2025-09-04 16:38:54.424800087 +0000 UTC m=+54.285741207" watchObservedRunningTime="2025-09-04 16:38:54.424913437 +0000 UTC m=+54.285854557"
Sep 4 16:38:54.455538 containerd[1609]: time="2025-09-04T16:38:54.455497743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\" id:\"33eccdd9980be588aa382abd2d561eb900931fb695fb68ee115ccaeb672a2d38\" pid:5320 exited_at:{seconds:1757003934 nanos:455270363}"
Sep 4 16:38:55.506605 systemd[1]: Started sshd@12-10.0.0.3:22-10.0.0.1:55662.service - OpenSSH per-connection server daemon (10.0.0.1:55662).
Sep 4 16:38:55.570840 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 55662 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:38:55.572153 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:38:55.576194 systemd-logind[1591]: New session 13 of user core.
Sep 4 16:38:55.586040 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 16:38:55.707633 sshd[5334]: Connection closed by 10.0.0.1 port 55662
Sep 4 16:38:55.708096 sshd-session[5331]: pam_unix(sshd:session): session closed for user core
Sep 4 16:38:55.712784 systemd[1]: sshd@12-10.0.0.3:22-10.0.0.1:55662.service: Deactivated successfully.
Sep 4 16:38:55.714816 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 16:38:55.715582 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit.
Sep 4 16:38:55.716561 systemd-logind[1591]: Removed session 13.
Sep 4 16:38:57.542236 containerd[1609]: time="2025-09-04T16:38:57.542177102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:57.543432 containerd[1609]: time="2025-09-04T16:38:57.543399373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 4 16:38:57.544749 containerd[1609]: time="2025-09-04T16:38:57.544717871Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:57.547152 containerd[1609]: time="2025-09-04T16:38:57.547112087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:38:57.547562 containerd[1609]: time="2025-09-04T16:38:57.547523643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 4.065054168s"
Sep 4 16:38:57.547562 containerd[1609]: time="2025-09-04T16:38:57.547557759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 4 16:38:57.548378 containerd[1609]: time="2025-09-04T16:38:57.548344088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 4 16:38:57.550614 containerd[1609]: time="2025-09-04T16:38:57.550581120Z" level=info msg="CreateContainer within sandbox \"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 4 16:38:57.561306 containerd[1609]: time="2025-09-04T16:38:57.561266057Z" level=info msg="Container 94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:38:57.570413 containerd[1609]: time="2025-09-04T16:38:57.570372693Z" level=info msg="CreateContainer within sandbox \"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae\""
Sep 4 16:38:57.570935 containerd[1609]: time="2025-09-04T16:38:57.570877488Z" level=info msg="StartContainer for \"94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae\""
Sep 4 16:38:57.572269 containerd[1609]: time="2025-09-04T16:38:57.572241473Z" level=info msg="connecting to shim 94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae" address="unix:///run/containerd/s/cf44dc6eff52be921eab665cda09b4da6305194621664309903c5afee0d90dc1" protocol=ttrpc version=3
Sep 4 16:38:57.594009 systemd[1]: Started cri-containerd-94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae.scope - libcontainer container 94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae.
Sep 4 16:38:57.636814 containerd[1609]: time="2025-09-04T16:38:57.636778303Z" level=info msg="StartContainer for \"94f182c4e7e4fe30e6fdcba877e6d11c61a8227c7b1543336cd6cd8a15a2a0ae\" returns successfully"
Sep 4 16:38:59.905594 containerd[1609]: time="2025-09-04T16:38:59.905529829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\" id:\"0585b1565041765a177ef73b4fb3c43cf38c394d4ee4946fb010fc24b901262c\" pid:5399 exited_at:{seconds:1757003939 nanos:904740325}"
Sep 4 16:39:00.581719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440307306.mount: Deactivated successfully.
Sep 4 16:39:00.707622 containerd[1609]: time="2025-09-04T16:39:00.700610720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:00.707743 containerd[1609]: time="2025-09-04T16:39:00.701478405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 4 16:39:00.707772 containerd[1609]: time="2025-09-04T16:39:00.705746922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.157374089s"
Sep 4 16:39:00.707799 containerd[1609]: time="2025-09-04T16:39:00.707774556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 4 16:39:00.708449 containerd[1609]: time="2025-09-04T16:39:00.708411124Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:00.709138 containerd[1609]: time="2025-09-04T16:39:00.709105955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:00.709681 containerd[1609]: time="2025-09-04T16:39:00.709585991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 16:39:00.710578 containerd[1609]: time="2025-09-04T16:39:00.710537136Z" level=info msg="CreateContainer within sandbox \"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 4 16:39:00.723472 systemd[1]: Started sshd@13-10.0.0.3:22-10.0.0.1:51548.service - OpenSSH per-connection server daemon (10.0.0.1:51548).
Sep 4 16:39:00.729525 containerd[1609]: time="2025-09-04T16:39:00.729393807Z" level=info msg="Container 5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:39:00.745127 containerd[1609]: time="2025-09-04T16:39:00.745082332Z" level=info msg="CreateContainer within sandbox \"8b6645598e4c889583ffd524e36749cdeb4e44a0868bb132bce371990cbe35e9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a\""
Sep 4 16:39:00.745549 containerd[1609]: time="2025-09-04T16:39:00.745519766Z" level=info msg="StartContainer for \"5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a\""
Sep 4 16:39:00.746478 containerd[1609]: time="2025-09-04T16:39:00.746442417Z" level=info msg="connecting to shim 5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a" address="unix:///run/containerd/s/edf0f5ce8a5dfb29519122a3ea2aa95a9baa4a10573625ce828abbf3d09cd2f4" protocol=ttrpc version=3
Sep 4 16:39:00.777121 systemd[1]: Started cri-containerd-5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a.scope - libcontainer container 5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a.
Sep 4 16:39:00.800316 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:00.802618 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:00.808117 systemd-logind[1591]: New session 14 of user core.
Sep 4 16:39:00.813099 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 16:39:00.826184 containerd[1609]: time="2025-09-04T16:39:00.826151780Z" level=info msg="StartContainer for \"5f547be8acf4f7411d1514564ba77b051eb3bfbad85792823044031f09e1bd1a\" returns successfully"
Sep 4 16:39:00.939556 sshd[5445]: Connection closed by 10.0.0.1 port 51548
Sep 4 16:39:00.939905 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:00.944939 systemd[1]: sshd@13-10.0.0.3:22-10.0.0.1:51548.service: Deactivated successfully.
Sep 4 16:39:00.946942 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 16:39:00.947705 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit.
Sep 4 16:39:00.948830 systemd-logind[1591]: Removed session 14.
Sep 4 16:39:01.142304 containerd[1609]: time="2025-09-04T16:39:01.142264345Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:01.143096 containerd[1609]: time="2025-09-04T16:39:01.143054098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 4 16:39:01.144619 containerd[1609]: time="2025-09-04T16:39:01.144584992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 434.959946ms"
Sep 4 16:39:01.144619 containerd[1609]: time="2025-09-04T16:39:01.144617453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 16:39:01.145684 containerd[1609]: time="2025-09-04T16:39:01.145549382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 4 16:39:01.146590 containerd[1609]: time="2025-09-04T16:39:01.146552647Z" level=info msg="CreateContainer within sandbox \"82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 16:39:01.157399 containerd[1609]: time="2025-09-04T16:39:01.157365836Z" level=info msg="Container 4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:39:01.165049 containerd[1609]: time="2025-09-04T16:39:01.165015674Z" level=info msg="CreateContainer within sandbox \"82f9ecdeda601de2a31b532876f1d2b91c2eaaea7e12d83f483ecc989dbd6941\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece\""
Sep 4 16:39:01.165443 containerd[1609]: time="2025-09-04T16:39:01.165420275Z" level=info msg="StartContainer for \"4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece\""
Sep 4 16:39:01.166275 containerd[1609]: time="2025-09-04T16:39:01.166251358Z" level=info msg="connecting to shim 4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece" address="unix:///run/containerd/s/dbf08c6d729f5ebb1d299417f115111de64a664f99ab9c6299d3eb2e93246c30" protocol=ttrpc version=3
Sep 4 16:39:01.186014 systemd[1]: Started cri-containerd-4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece.scope - libcontainer container 4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece.
Sep 4 16:39:01.235903 containerd[1609]: time="2025-09-04T16:39:01.232863980Z" level=info msg="StartContainer for \"4a5a55281303748c34cc57ac09b0683cf7ba846fb88c1df9bf62061c45c37ece\" returns successfully"
Sep 4 16:39:01.439781 kubelet[2751]: I0904 16:39:01.439724 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7589d758df-fw28l" podStartSLOduration=1.842006707 podStartE2EDuration="21.439708833s" podCreationTimestamp="2025-09-04 16:38:40 +0000 UTC" firstStartedPulling="2025-09-04 16:38:41.111669832 +0000 UTC m=+40.972610952" lastFinishedPulling="2025-09-04 16:39:00.709371958 +0000 UTC m=+60.570313078" observedRunningTime="2025-09-04 16:39:01.439336765 +0000 UTC m=+61.300277885" watchObservedRunningTime="2025-09-04 16:39:01.439708833 +0000 UTC m=+61.300649953"
Sep 4 16:39:01.456031 kubelet[2751]: I0904 16:39:01.455473 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64b68b94f-jk85x" podStartSLOduration=28.891513909 podStartE2EDuration="46.455453355s" podCreationTimestamp="2025-09-04 16:38:15 +0000 UTC" firstStartedPulling="2025-09-04 16:38:43.581388427 +0000 UTC m=+43.442329548" lastFinishedPulling="2025-09-04 16:39:01.145327874 +0000 UTC m=+61.006268994" observedRunningTime="2025-09-04 16:39:01.455419439 +0000 UTC m=+61.316360560" watchObservedRunningTime="2025-09-04 16:39:01.455453355 +0000 UTC m=+61.316394475"
Sep 4 16:39:01.932081 containerd[1609]: time="2025-09-04T16:39:01.932027901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a83cfc10283e56e7b3c3805950162b7e55b23dbad3130a0cd25a86073383a22\" id:\"21c973aa47182b78c68531aa4f1cc218f9a6c07d1ff1a7e534853eabb15edb03\" pid:5521 exited_at:{seconds:1757003941 nanos:931618421}"
Sep 4 16:39:02.433050 kubelet[2751]: I0904 16:39:02.433012 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:39:03.885904 containerd[1609]: time="2025-09-04T16:39:03.885770403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:03.886597 containerd[1609]: time="2025-09-04T16:39:03.886564414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 4 16:39:03.887855 containerd[1609]: time="2025-09-04T16:39:03.887812610Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:03.889916 containerd[1609]: time="2025-09-04T16:39:03.889881568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:39:03.890391 containerd[1609]: time="2025-09-04T16:39:03.890351164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.744756005s"
Sep 4 16:39:03.890391 containerd[1609]: time="2025-09-04T16:39:03.890389157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 4 16:39:03.892532 containerd[1609]: time="2025-09-04T16:39:03.892486711Z" level=info msg="CreateContainer within sandbox \"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 16:39:03.901979 containerd[1609]: time="2025-09-04T16:39:03.901935176Z" level=info msg="Container 6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:39:03.919869 containerd[1609]: time="2025-09-04T16:39:03.919834702Z" level=info msg="CreateContainer within sandbox \"c1ef0c62674a5f47e091c82f2be141d7c41fb70962230fe48b1e62cf9562132b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f\""
Sep 4 16:39:03.920860 containerd[1609]: time="2025-09-04T16:39:03.920306111Z" level=info msg="StartContainer for \"6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f\""
Sep 4 16:39:03.921523 containerd[1609]: time="2025-09-04T16:39:03.921497258Z" level=info msg="connecting to shim 6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f" address="unix:///run/containerd/s/cf44dc6eff52be921eab665cda09b4da6305194621664309903c5afee0d90dc1" protocol=ttrpc version=3
Sep 4 16:39:03.945017 systemd[1]: Started cri-containerd-6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f.scope - libcontainer container 6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f.
Sep 4 16:39:03.989375 containerd[1609]: time="2025-09-04T16:39:03.989337018Z" level=info msg="StartContainer for \"6eaf6083418dfc9f9f97129db02be7f272d678c392bb3d4c51b72a769c535f7f\" returns successfully"
Sep 4 16:39:04.285851 kubelet[2751]: I0904 16:39:04.285734 2751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 4 16:39:04.285851 kubelet[2751]: I0904 16:39:04.285771 2751 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 4 16:39:04.450307 kubelet[2751]: I0904 16:39:04.450232 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x4vr9" podStartSLOduration=24.957518886 podStartE2EDuration="46.450210268s" podCreationTimestamp="2025-09-04 16:38:18 +0000 UTC" firstStartedPulling="2025-09-04 16:38:42.398496795 +0000 UTC m=+42.259437905" lastFinishedPulling="2025-09-04 16:39:03.891188167 +0000 UTC m=+63.752129287" observedRunningTime="2025-09-04 16:39:04.449208598 +0000 UTC m=+64.310149718" watchObservedRunningTime="2025-09-04 16:39:04.450210268 +0000 UTC m=+64.311151388"
Sep 4 16:39:05.951564 systemd[1]: Started sshd@14-10.0.0.3:22-10.0.0.1:51552.service - OpenSSH per-connection server daemon (10.0.0.1:51552).
Sep 4 16:39:06.006147 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 51552 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:06.007367 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:06.011310 systemd-logind[1591]: New session 15 of user core.
Sep 4 16:39:06.021002 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 16:39:06.173771 sshd[5584]: Connection closed by 10.0.0.1 port 51552
Sep 4 16:39:06.174100 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:06.177800 systemd[1]: sshd@14-10.0.0.3:22-10.0.0.1:51552.service: Deactivated successfully.
Sep 4 16:39:06.179815 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 16:39:06.181255 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit.
Sep 4 16:39:06.182404 systemd-logind[1591]: Removed session 15.
Sep 4 16:39:07.749868 containerd[1609]: time="2025-09-04T16:39:07.749827811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\" id:\"bcf3a04891280424e3a7f6f20b788c2b22533aba482e964ca809e91bda5773cc\" pid:5609 exited_at:{seconds:1757003947 nanos:749647594}"
Sep 4 16:39:11.186716 systemd[1]: Started sshd@15-10.0.0.3:22-10.0.0.1:42516.service - OpenSSH per-connection server daemon (10.0.0.1:42516).
Sep 4 16:39:11.264418 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 42516 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:11.266244 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:11.271599 systemd-logind[1591]: New session 16 of user core.
Sep 4 16:39:11.279024 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 16:39:11.421307 sshd[5627]: Connection closed by 10.0.0.1 port 42516
Sep 4 16:39:11.421732 sshd-session[5624]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:11.431582 systemd[1]: sshd@15-10.0.0.3:22-10.0.0.1:42516.service: Deactivated successfully.
Sep 4 16:39:11.433419 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 16:39:11.434146 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit.
Sep 4 16:39:11.437840 systemd[1]: Started sshd@16-10.0.0.3:22-10.0.0.1:42532.service - OpenSSH per-connection server daemon (10.0.0.1:42532).
Sep 4 16:39:11.439056 systemd-logind[1591]: Removed session 16.
Sep 4 16:39:11.487745 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 42532 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:11.492176 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:11.503261 systemd-logind[1591]: New session 17 of user core.
Sep 4 16:39:11.512014 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 16:39:11.720968 sshd[5645]: Connection closed by 10.0.0.1 port 42532
Sep 4 16:39:11.723593 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:11.733735 systemd[1]: sshd@16-10.0.0.3:22-10.0.0.1:42532.service: Deactivated successfully.
Sep 4 16:39:11.736912 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 16:39:11.738096 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit.
Sep 4 16:39:11.742820 systemd[1]: Started sshd@17-10.0.0.3:22-10.0.0.1:42546.service - OpenSSH per-connection server daemon (10.0.0.1:42546).
Sep 4 16:39:11.745605 systemd-logind[1591]: Removed session 17.
Sep 4 16:39:11.797856 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 42546 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:11.799916 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:11.804935 systemd-logind[1591]: New session 18 of user core.
Sep 4 16:39:11.815063 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 16:39:13.358164 sshd[5661]: Connection closed by 10.0.0.1 port 42546
Sep 4 16:39:13.360431 sshd-session[5658]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:13.375565 systemd[1]: Started sshd@18-10.0.0.3:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560).
Sep 4 16:39:13.376325 systemd[1]: sshd@17-10.0.0.3:22-10.0.0.1:42546.service: Deactivated successfully.
Sep 4 16:39:13.378198 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 16:39:13.378491 systemd[1]: session-18.scope: Consumed 611ms CPU time, 74.1M memory peak.
Sep 4 16:39:13.380248 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit.
Sep 4 16:39:13.383395 systemd-logind[1591]: Removed session 18.
Sep 4 16:39:13.449012 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:13.450579 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:13.457174 systemd-logind[1591]: New session 19 of user core.
Sep 4 16:39:13.462040 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 16:39:13.832122 sshd[5685]: Connection closed by 10.0.0.1 port 42560
Sep 4 16:39:13.834006 sshd-session[5678]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:13.844610 systemd[1]: sshd@18-10.0.0.3:22-10.0.0.1:42560.service: Deactivated successfully.
Sep 4 16:39:13.846522 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 16:39:13.848054 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit.
Sep 4 16:39:13.850278 systemd[1]: Started sshd@19-10.0.0.3:22-10.0.0.1:42562.service - OpenSSH per-connection server daemon (10.0.0.1:42562).
Sep 4 16:39:13.850979 systemd-logind[1591]: Removed session 19.
Sep 4 16:39:13.904907 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 42562 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:13.906214 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:13.912861 systemd-logind[1591]: New session 20 of user core.
Sep 4 16:39:13.926028 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 16:39:14.034276 sshd[5699]: Connection closed by 10.0.0.1 port 42562
Sep 4 16:39:14.034589 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:14.039359 systemd[1]: sshd@19-10.0.0.3:22-10.0.0.1:42562.service: Deactivated successfully.
Sep 4 16:39:14.041482 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 16:39:14.042324 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit.
Sep 4 16:39:14.043648 systemd-logind[1591]: Removed session 20.
Sep 4 16:39:14.254405 kubelet[2751]: I0904 16:39:14.254363 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:39:17.957520 containerd[1609]: time="2025-09-04T16:39:17.957477043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5d86a83355ef041795f00c62922b74ce5720fd40be4ce765c64ca91fa5c5e42\" id:\"fb70e66739e4a12b253078f8ecb98074c115aa50bea8cbdc1ea60850c17f3669\" pid:5725 exited_at:{seconds:1757003957 nanos:957167231}"
Sep 4 16:39:19.053304 systemd[1]: Started sshd@20-10.0.0.3:22-10.0.0.1:42572.service - OpenSSH per-connection server daemon (10.0.0.1:42572).
Sep 4 16:39:19.098103 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 42572 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:19.099333 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:19.104865 systemd-logind[1591]: New session 21 of user core.
Sep 4 16:39:19.109035 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 16:39:19.215905 kubelet[2751]: E0904 16:39:19.214339 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:39:19.253501 sshd[5742]: Connection closed by 10.0.0.1 port 42572
Sep 4 16:39:19.253880 sshd-session[5739]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:19.261335 systemd[1]: sshd@20-10.0.0.3:22-10.0.0.1:42572.service: Deactivated successfully.
Sep 4 16:39:19.263289 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 16:39:19.267209 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit.
Sep 4 16:39:19.271190 systemd-logind[1591]: Removed session 21.
Sep 4 16:39:20.180433 containerd[1609]: time="2025-09-04T16:39:20.180350317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e32fe61aab88102611665c473cb414da52d2f25a7574b435f02bed24c8a185b\" id:\"5cf5f8d0b3e1591a8dc33043999d58fa2a5708dd0a301c8d61d02a554e993e44\" pid:5768 exited_at:{seconds:1757003960 nanos:180025858}"
Sep 4 16:39:23.418565 kubelet[2751]: I0904 16:39:23.418289 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:39:24.276092 systemd[1]: Started sshd@21-10.0.0.3:22-10.0.0.1:43012.service - OpenSSH per-connection server daemon (10.0.0.1:43012).
Sep 4 16:39:24.324729 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 43012 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:24.325967 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:24.330929 systemd-logind[1591]: New session 22 of user core.
Sep 4 16:39:24.337023 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 16:39:24.452585 sshd[5792]: Connection closed by 10.0.0.1 port 43012
Sep 4 16:39:24.453281 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:24.458285 systemd[1]: sshd@21-10.0.0.3:22-10.0.0.1:43012.service: Deactivated successfully.
Sep 4 16:39:24.460355 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 16:39:24.461299 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit.
Sep 4 16:39:24.462624 systemd-logind[1591]: Removed session 22.
Sep 4 16:39:26.214808 kubelet[2751]: E0904 16:39:26.214774 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:39:29.466545 systemd[1]: Started sshd@22-10.0.0.3:22-10.0.0.1:43018.service - OpenSSH per-connection server daemon (10.0.0.1:43018).
Sep 4 16:39:29.520807 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 43018 ssh2: RSA SHA256:mPsqTLGsbkUoN2OVYQ3lqy/0sdpKo6WWb9yO3cg116E
Sep 4 16:39:29.522019 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:39:29.526022 systemd-logind[1591]: New session 23 of user core.
Sep 4 16:39:29.534048 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 16:39:29.638791 sshd[5810]: Connection closed by 10.0.0.1 port 43018
Sep 4 16:39:29.639136 sshd-session[5807]: pam_unix(sshd:session): session closed for user core
Sep 4 16:39:29.644046 systemd[1]: sshd@22-10.0.0.3:22-10.0.0.1:43018.service: Deactivated successfully.
Sep 4 16:39:29.646097 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 16:39:29.646901 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit.
Sep 4 16:39:29.648109 systemd-logind[1591]: Removed session 23.
Sep 4 16:39:30.215128 kubelet[2751]: E0904 16:39:30.215096 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"