Sep 12 00:12:54.954129 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 11 22:16:52 -00 2025
Sep 12 00:12:54.954160 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34
Sep 12 00:12:54.954172 kernel: BIOS-provided physical RAM map:
Sep 12 00:12:54.954181 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 00:12:54.954190 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 00:12:54.954198 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 00:12:54.954208 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 00:12:54.954218 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 00:12:54.954233 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 00:12:54.954241 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 00:12:54.954249 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 00:12:54.954258 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 00:12:54.954266 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 00:12:54.954275 kernel: NX (Execute Disable) protection: active
Sep 12 00:12:54.954289 kernel: APIC: Static calls initialized
Sep 12 00:12:54.954299 kernel: SMBIOS 2.8 present.
Sep 12 00:12:54.954312 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 00:12:54.954321 kernel: DMI: Memory slots populated: 1/1
Sep 12 00:12:54.954330 kernel: Hypervisor detected: KVM
Sep 12 00:12:54.954340 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 00:12:54.954349 kernel: kvm-clock: using sched offset of 5140107162 cycles
Sep 12 00:12:54.954359 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 00:12:54.954369 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 00:12:54.954381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 00:12:54.954391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 00:12:54.954401 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 00:12:54.954411 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 00:12:54.954434 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 00:12:54.954454 kernel: Using GB pages for direct mapping
Sep 12 00:12:54.954464 kernel: ACPI: Early table checksum verification disabled
Sep 12 00:12:54.954473 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 00:12:54.954483 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954497 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954507 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954516 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 00:12:54.954526 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954536 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954546 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954566 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:12:54.954577 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 00:12:54.954595 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 00:12:54.954605 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 00:12:54.954615 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 00:12:54.954625 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 00:12:54.954635 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 00:12:54.954645 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 00:12:54.954658 kernel: No NUMA configuration found
Sep 12 00:12:54.954668 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 00:12:54.954679 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 00:12:54.954689 kernel: Zone ranges:
Sep 12 00:12:54.954700 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 00:12:54.954710 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 00:12:54.954720 kernel: Normal empty
Sep 12 00:12:54.954730 kernel: Device empty
Sep 12 00:12:54.954741 kernel: Movable zone start for each node
Sep 12 00:12:54.954751 kernel: Early memory node ranges
Sep 12 00:12:54.954766 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 00:12:54.954776 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 00:12:54.954787 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 00:12:54.954798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 00:12:54.954808 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 00:12:54.954819 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 00:12:54.954830 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 00:12:54.954845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 00:12:54.954855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 00:12:54.954869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 00:12:54.954880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 00:12:54.954893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 00:12:54.954904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 00:12:54.954915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 00:12:54.954925 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 00:12:54.954936 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 00:12:54.954946 kernel: TSC deadline timer available
Sep 12 00:12:54.954957 kernel: CPU topo: Max. logical packages: 1
Sep 12 00:12:54.954971 kernel: CPU topo: Max. logical dies: 1
Sep 12 00:12:54.954981 kernel: CPU topo: Max. dies per package: 1
Sep 12 00:12:54.954991 kernel: CPU topo: Max. threads per core: 1
Sep 12 00:12:54.955002 kernel: CPU topo: Num. cores per package: 4
Sep 12 00:12:54.955012 kernel: CPU topo: Num. threads per package: 4
Sep 12 00:12:54.955022 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 00:12:54.955033 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 00:12:54.955060 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 00:12:54.955071 kernel: kvm-guest: setup PV sched yield
Sep 12 00:12:54.955082 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 00:12:54.955099 kernel: Booting paravirtualized kernel on KVM
Sep 12 00:12:54.955113 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 00:12:54.955124 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 00:12:54.955135 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 00:12:54.955146 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 00:12:54.955156 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 00:12:54.955167 kernel: kvm-guest: PV spinlocks enabled
Sep 12 00:12:54.955177 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 00:12:54.955190 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34
Sep 12 00:12:54.955204 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 00:12:54.955214 kernel: random: crng init done
Sep 12 00:12:54.955224 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 00:12:54.955235 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 00:12:54.955246 kernel: Fallback order for Node 0: 0
Sep 12 00:12:54.955257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 00:12:54.955268 kernel: Policy zone: DMA32
Sep 12 00:12:54.955279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 00:12:54.955293 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 00:12:54.955303 kernel: ftrace: allocating 40123 entries in 157 pages
Sep 12 00:12:54.955314 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 00:12:54.955325 kernel: Dynamic Preempt: voluntary
Sep 12 00:12:54.955336 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 00:12:54.955347 kernel: rcu: RCU event tracing is enabled.
Sep 12 00:12:54.955358 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 00:12:54.955370 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 00:12:54.955384 kernel: Rude variant of Tasks RCU enabled.
Sep 12 00:12:54.955397 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 00:12:54.955409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 00:12:54.955419 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 00:12:54.955430 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:12:54.955441 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:12:54.955452 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:12:54.955463 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 00:12:54.955474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 00:12:54.955498 kernel: Console: colour VGA+ 80x25
Sep 12 00:12:54.955508 kernel: printk: legacy console [ttyS0] enabled
Sep 12 00:12:54.955520 kernel: ACPI: Core revision 20240827
Sep 12 00:12:54.955532 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 00:12:54.955545 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 00:12:54.955567 kernel: x2apic enabled
Sep 12 00:12:54.955578 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 00:12:54.955592 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 00:12:54.955604 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 00:12:54.955618 kernel: kvm-guest: setup PV IPIs
Sep 12 00:12:54.955629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 00:12:54.955641 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:12:54.955653 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 00:12:54.955664 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 00:12:54.955675 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 00:12:54.955687 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 00:12:54.955699 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 00:12:54.955712 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 00:12:54.955724 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 00:12:54.955735 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 00:12:54.955747 kernel: active return thunk: retbleed_return_thunk
Sep 12 00:12:54.955758 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 00:12:54.955769 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 00:12:54.955781 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 00:12:54.955792 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 00:12:54.955804 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 00:12:54.955819 kernel: active return thunk: srso_return_thunk
Sep 12 00:12:54.955830 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 00:12:54.955841 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 00:12:54.955853 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 00:12:54.955864 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 00:12:54.955875 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 00:12:54.955887 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 00:12:54.955898 kernel: Freeing SMP alternatives memory: 32K
Sep 12 00:12:54.955909 kernel: pid_max: default: 32768 minimum: 301
Sep 12 00:12:54.955923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 00:12:54.955934 kernel: landlock: Up and running.
Sep 12 00:12:54.955944 kernel: SELinux: Initializing.
Sep 12 00:12:54.955957 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:12:54.955968 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:12:54.955979 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 00:12:54.955989 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 00:12:54.956000 kernel: ... version: 0
Sep 12 00:12:54.956010 kernel: ... bit width: 48
Sep 12 00:12:54.956024 kernel: ... generic registers: 6
Sep 12 00:12:54.956034 kernel: ... value mask: 0000ffffffffffff
Sep 12 00:12:54.956060 kernel: ... max period: 00007fffffffffff
Sep 12 00:12:54.956071 kernel: ... fixed-purpose events: 0
Sep 12 00:12:54.956093 kernel: ... event mask: 000000000000003f
Sep 12 00:12:54.956114 kernel: signal: max sigframe size: 1776
Sep 12 00:12:54.956134 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 00:12:54.956145 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 00:12:54.956156 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 00:12:54.956172 kernel: smp: Bringing up secondary CPUs ...
Sep 12 00:12:54.956183 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 00:12:54.956193 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 00:12:54.956204 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 00:12:54.956215 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 00:12:54.956232 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54048K init, 2916K bss, 136904K reserved, 0K cma-reserved)
Sep 12 00:12:54.956243 kernel: devtmpfs: initialized
Sep 12 00:12:54.956254 kernel: x86/mm: Memory block size: 128MB
Sep 12 00:12:54.956265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 00:12:54.956282 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 00:12:54.956293 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 00:12:54.956304 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 00:12:54.956315 kernel: audit: initializing netlink subsys (disabled)
Sep 12 00:12:54.956327 kernel: audit: type=2000 audit(1757635971.700:1): state=initialized audit_enabled=0 res=1
Sep 12 00:12:54.956338 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 00:12:54.956349 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 00:12:54.956360 kernel: cpuidle: using governor menu
Sep 12 00:12:54.956371 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 00:12:54.956385 kernel: dca service started, version 1.12.1
Sep 12 00:12:54.956397 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 00:12:54.956408 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 00:12:54.956419 kernel: PCI: Using configuration type 1 for base access
Sep 12 00:12:54.956430 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 00:12:54.956442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 00:12:54.956453 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 00:12:54.956464 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 00:12:54.956476 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 00:12:54.956491 kernel: ACPI: Added _OSI(Module Device)
Sep 12 00:12:54.956502 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 00:12:54.956513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 00:12:54.956524 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 00:12:54.956535 kernel: ACPI: Interpreter enabled
Sep 12 00:12:54.956546 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 00:12:54.956568 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 00:12:54.956580 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 00:12:54.956591 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 00:12:54.956605 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 00:12:54.956617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 00:12:54.956885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 00:12:54.957072 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 00:12:54.957234 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 00:12:54.957249 kernel: PCI host bridge to bus 0000:00
Sep 12 00:12:54.957471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 00:12:54.957646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 00:12:54.957790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 00:12:54.957941 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 00:12:54.958116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 00:12:54.958258 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 00:12:54.958398 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 00:12:54.958606 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 00:12:54.958790 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 00:12:54.958947 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 00:12:54.959132 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 00:12:54.959290 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 00:12:54.959446 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 00:12:54.959630 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 00:12:54.959795 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 00:12:54.959951 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 00:12:54.960129 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 00:12:54.960305 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 00:12:54.960464 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 00:12:54.960632 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 00:12:54.960790 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 00:12:54.960975 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 00:12:54.961164 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 00:12:54.961342 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 00:12:54.961506 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 00:12:54.961675 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 00:12:54.961868 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 00:12:54.962032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 00:12:54.962238 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 00:12:54.962398 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 00:12:54.962563 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 00:12:54.962738 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 00:12:54.962896 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 00:12:54.962913 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 00:12:54.962929 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 00:12:54.962940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 00:12:54.962952 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 00:12:54.962963 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 00:12:54.962974 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 00:12:54.962985 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 00:12:54.962996 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 00:12:54.963007 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 00:12:54.963018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 00:12:54.963032 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 00:12:54.963063 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 00:12:54.963075 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 00:12:54.963086 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 00:12:54.963097 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 00:12:54.963108 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 00:12:54.963120 kernel: iommu: Default domain type: Translated
Sep 12 00:12:54.963131 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 00:12:54.963142 kernel: PCI: Using ACPI for IRQ routing
Sep 12 00:12:54.963157 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 00:12:54.963169 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 00:12:54.963180 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 00:12:54.963341 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 00:12:54.963499 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 00:12:54.963667 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 00:12:54.963684 kernel: vgaarb: loaded
Sep 12 00:12:54.963695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 00:12:54.963711 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 00:12:54.963723 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 00:12:54.963734 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 00:12:54.963745 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 00:12:54.963757 kernel: pnp: PnP ACPI init
Sep 12 00:12:54.963942 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 00:12:54.963960 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 00:12:54.963972 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 00:12:54.963983 kernel: NET: Registered PF_INET protocol family
Sep 12 00:12:54.963999 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 00:12:54.964010 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 00:12:54.964022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 00:12:54.964033 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 00:12:54.964066 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 00:12:54.964078 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 00:12:54.964090 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:12:54.964101 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:12:54.964116 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 00:12:54.964128 kernel: NET: Registered PF_XDP protocol family
Sep 12 00:12:54.964284 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 00:12:54.964428 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 00:12:54.964582 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 00:12:54.964735 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 00:12:54.964878 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 00:12:54.965020 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 00:12:54.965036 kernel: PCI: CLS 0 bytes, default 64
Sep 12 00:12:54.965084 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:12:54.965114 kernel: Initialise system trusted keyrings
Sep 12 00:12:54.965126 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 00:12:54.965138 kernel: Key type asymmetric registered
Sep 12 00:12:54.965149 kernel: Asymmetric key parser 'x509' registered
Sep 12 00:12:54.965160 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 00:12:54.965172 kernel: io scheduler mq-deadline registered
Sep 12 00:12:54.965183 kernel: io scheduler kyber registered
Sep 12 00:12:54.965194 kernel: io scheduler bfq registered
Sep 12 00:12:54.965210 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 00:12:54.965222 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 00:12:54.965234 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 00:12:54.965245 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 00:12:54.965257 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 00:12:54.965268 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 00:12:54.965280 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 00:12:54.965291 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 00:12:54.965302 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 00:12:54.965486 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 00:12:54.965650 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 00:12:54.965666 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 00:12:54.965812 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T00:12:54 UTC (1757635974)
Sep 12 00:12:54.965958 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 00:12:54.965974 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 00:12:54.965985 kernel: NET: Registered PF_INET6 protocol family
Sep 12 00:12:54.965997 kernel: hpet: Lost 1 RTC interrupts
Sep 12 00:12:54.966012 kernel: Segment Routing with IPv6
Sep 12 00:12:54.966024 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 00:12:54.966035 kernel: NET: Registered PF_PACKET protocol family
Sep 12 00:12:54.966065 kernel: Key type dns_resolver registered
Sep 12 00:12:54.966077 kernel: IPI shorthand broadcast: enabled
Sep 12 00:12:54.966089 kernel: sched_clock: Marking stable (3450003051, 128259177)->(3618783794, -40521566)
Sep 12 00:12:54.966101 kernel: registered taskstats version 1
Sep 12 00:12:54.966112 kernel: Loading compiled-in X.509 certificates
Sep 12 00:12:54.966124 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 7f0ac4b747edc7786b3c2c5a8c3072fe759c894b'
Sep 12 00:12:54.966138 kernel: Demotion targets for Node 0: null
Sep 12 00:12:54.966150 kernel: Key type .fscrypt registered
Sep 12 00:12:54.966161 kernel: Key type fscrypt-provisioning registered
Sep 12 00:12:54.966173 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 00:12:54.966184 kernel: ima: Allocated hash algorithm: sha1
Sep 12 00:12:54.966196 kernel: ima: No architecture policies found
Sep 12 00:12:54.966207 kernel: clk: Disabling unused clocks
Sep 12 00:12:54.966218 kernel: Warning: unable to open an initial console.
Sep 12 00:12:54.966233 kernel: Freeing unused kernel image (initmem) memory: 54048K
Sep 12 00:12:54.966244 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 00:12:54.966256 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 00:12:54.966267 kernel: Run /init as init process
Sep 12 00:12:54.966278 kernel: with arguments:
Sep 12 00:12:54.966289 kernel: /init
Sep 12 00:12:54.966300 kernel: with environment:
Sep 12 00:12:54.966311 kernel: HOME=/
Sep 12 00:12:54.966322 kernel: TERM=linux
Sep 12 00:12:54.966333 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 00:12:54.966369 systemd[1]: Successfully made /usr/ read-only.
Sep 12 00:12:54.966388 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 00:12:54.966401 systemd[1]: Detected virtualization kvm.
Sep 12 00:12:54.966434 systemd[1]: Detected architecture x86-64.
Sep 12 00:12:54.966446 systemd[1]: Running in initrd.
Sep 12 00:12:54.966462 systemd[1]: No hostname configured, using default hostname.
Sep 12 00:12:54.966479 systemd[1]: Hostname set to .
Sep 12 00:12:54.966490 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 00:12:54.966501 systemd[1]: Queued start job for default target initrd.target.
Sep 12 00:12:54.966514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 00:12:54.966523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:12:54.966532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 00:12:54.966541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 00:12:54.966558 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 00:12:54.966572 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 00:12:54.966582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 00:12:54.966591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 00:12:54.966600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:12:54.966608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:12:54.966617 systemd[1]: Reached target paths.target - Path Units.
Sep 12 00:12:54.966626 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 00:12:54.966635 systemd[1]: Reached target swap.target - Swaps.
Sep 12 00:12:54.966646 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 00:12:54.966655 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 00:12:54.966663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 00:12:54.966672 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 00:12:54.966681 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 00:12:54.966689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:12:54.966698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:12:54.966707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:12:54.966717 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 00:12:54.966726 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 00:12:54.966735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 00:12:54.966746 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 00:12:54.966755 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 00:12:54.966766 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 00:12:54.966775 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 00:12:54.966783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 00:12:54.966792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:12:54.966801 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 00:12:54.966810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:12:54.966821 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 00:12:54.966833 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 00:12:54.966883 systemd-journald[220]: Collecting audit messages is disabled.
Sep 12 00:12:54.966917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 00:12:54.966930 systemd-journald[220]: Journal started
Sep 12 00:12:54.966960 systemd-journald[220]: Runtime Journal (/run/log/journal/1ea95097a4984081b1a0e62acd00cd63) is 6M, max 48.6M, 42.5M free.
Sep 12 00:12:54.954543 systemd-modules-load[221]: Inserted module 'overlay'
Sep 12 00:12:55.002978 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 00:12:55.003010 kernel: Bridge firewalling registered
Sep 12 00:12:54.984759 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 12 00:12:55.005826 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 00:12:55.006301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 00:12:55.008606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:12:55.014147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 00:12:55.017621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 00:12:55.023790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 00:12:55.024814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 00:12:55.035189 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 00:12:55.035818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 00:12:55.036528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 00:12:55.040945 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:12:55.044572 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 00:12:55.067331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 00:12:55.080352 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 00:12:55.116897 systemd-resolved[253]: Positive Trust Anchors:
Sep 12 00:12:55.116916 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 00:12:55.116955 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 00:12:55.120523 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 12 00:12:55.121875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 00:12:55.135090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 00:12:55.151450 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34
Sep 12 00:12:55.258102 kernel: SCSI subsystem initialized
Sep 12 00:12:55.270100 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 00:12:55.283084 kernel: iscsi: registered transport (tcp)
Sep 12 00:12:55.307146 kernel: iscsi: registered transport (qla4xxx)
Sep 12 00:12:55.307248 kernel: QLogic iSCSI HBA Driver
Sep 12 00:12:55.331084 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 00:12:55.352271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 00:12:55.353284 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 00:12:55.419985 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 00:12:55.421981 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 00:12:55.494118 kernel: raid6: avx2x4 gen() 21268 MB/s
Sep 12 00:12:55.511175 kernel: raid6: avx2x2 gen() 20893 MB/s
Sep 12 00:12:55.528462 kernel: raid6: avx2x1 gen() 17424 MB/s
Sep 12 00:12:55.528568 kernel: raid6: using algorithm avx2x4 gen() 21268 MB/s
Sep 12 00:12:55.546616 kernel: raid6: .... xor() 5370 MB/s, rmw enabled
Sep 12 00:12:55.546759 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 00:12:55.572827 kernel: xor: automatically using best checksumming function avx
Sep 12 00:12:55.780130 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 00:12:55.793454 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 00:12:55.795631 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:12:55.835130 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 12 00:12:55.841743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 00:12:55.846399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 00:12:55.897386 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Sep 12 00:12:55.935409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 00:12:55.943661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 00:12:56.033529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 00:12:56.038260 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 00:12:56.071080 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 00:12:56.074276 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 00:12:56.079957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 00:12:56.080019 kernel: GPT:9289727 != 19775487
Sep 12 00:12:56.080060 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 00:12:56.080083 kernel: GPT:9289727 != 19775487
Sep 12 00:12:56.080101 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 00:12:56.080122 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:12:56.096067 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 00:12:56.101063 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 00:12:56.113067 kernel: libata version 3.00 loaded.
Sep 12 00:12:56.117079 kernel: AES CTR mode by8 optimization enabled
Sep 12 00:12:56.118788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 00:12:56.119117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:12:56.125793 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:12:56.133061 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 00:12:56.133463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:12:56.140730 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 00:12:56.136577 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:12:56.145852 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 12 00:12:56.146122 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 12 00:12:56.146271 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 00:12:56.160300 kernel: scsi host0: ahci
Sep 12 00:12:56.165413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 00:12:56.171073 kernel: scsi host1: ahci
Sep 12 00:12:56.190064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 00:12:56.223131 kernel: scsi host2: ahci
Sep 12 00:12:56.225075 kernel: scsi host3: ahci
Sep 12 00:12:56.226103 kernel: scsi host4: ahci
Sep 12 00:12:56.227085 kernel: scsi host5: ahci
Sep 12 00:12:56.227352 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 12 00:12:56.229465 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 12 00:12:56.229493 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 12 00:12:56.229504 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 12 00:12:56.229540 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 12 00:12:56.229555 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 12 00:12:56.229839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 00:12:56.271701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 00:12:56.283444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 00:12:56.283999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:12:56.307792 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 00:12:56.539069 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 00:12:56.539167 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 00:12:56.539181 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 00:12:56.539194 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 00:12:56.539737 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 00:12:56.540534 kernel: ata3.00: applying bridge limits
Sep 12 00:12:56.542075 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 00:12:56.543089 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 00:12:56.543115 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 00:12:56.544204 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 00:12:56.545099 kernel: ata3.00: configured for UDMA/100
Sep 12 00:12:56.546082 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 00:12:56.558106 disk-uuid[635]: Primary Header is updated.
Sep 12 00:12:56.558106 disk-uuid[635]: Secondary Entries is updated.
Sep 12 00:12:56.558106 disk-uuid[635]: Secondary Header is updated.
Sep 12 00:12:56.563105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:12:56.571102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:12:56.609090 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 00:12:56.609419 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 00:12:56.628130 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 00:12:57.010095 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 00:12:57.010924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 00:12:57.013992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:12:57.014458 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 00:12:57.021394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 00:12:57.061859 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 00:12:57.656133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:12:57.656836 disk-uuid[636]: The operation has completed successfully.
Sep 12 00:12:57.698697 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 00:12:57.698827 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 00:12:57.730002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 00:12:57.760966 sh[665]: Success
Sep 12 00:12:57.781812 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 00:12:57.781858 kernel: device-mapper: uevent: version 1.0.3
Sep 12 00:12:57.781881 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 00:12:57.793134 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 12 00:12:57.824288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 00:12:57.828283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 00:12:57.846082 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 00:12:57.855833 kernel: BTRFS: device fsid ec8d3ca5-0acc-4472-a648-2b3bd2a05eb0 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677)
Sep 12 00:12:57.855867 kernel: BTRFS info (device dm-0): first mount of filesystem ec8d3ca5-0acc-4472-a648-2b3bd2a05eb0
Sep 12 00:12:57.856914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:12:57.863129 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 00:12:57.863173 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 00:12:57.864957 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 00:12:57.867273 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 00:12:57.869661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 00:12:57.872540 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 00:12:57.875316 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 00:12:57.949097 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Sep 12 00:12:57.949206 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:12:57.949223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:12:57.953281 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:12:57.953363 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:12:57.959095 kernel: BTRFS info (device vda6): last unmount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:12:57.960120 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 00:12:57.963505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 00:12:58.013259 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 00:12:58.017610 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 00:12:58.070874 systemd-networkd[849]: lo: Link UP
Sep 12 00:12:58.070884 systemd-networkd[849]: lo: Gained carrier
Sep 12 00:12:58.073428 systemd-networkd[849]: Enumeration completed
Sep 12 00:12:58.073552 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 00:12:58.074356 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:12:58.074361 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 00:12:58.074854 systemd-networkd[849]: eth0: Link UP
Sep 12 00:12:58.075670 systemd-networkd[849]: eth0: Gained carrier
Sep 12 00:12:58.075680 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:12:58.134946 systemd[1]: Reached target network.target - Network.
Sep 12 00:12:58.154616 ignition[803]: Ignition 2.21.0
Sep 12 00:12:58.154633 ignition[803]: Stage: fetch-offline
Sep 12 00:12:58.154674 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:12:58.154683 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:12:58.155176 ignition[803]: parsed url from cmdline: ""
Sep 12 00:12:58.155183 ignition[803]: no config URL provided
Sep 12 00:12:58.155191 ignition[803]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 00:12:58.155206 ignition[803]: no config at "/usr/lib/ignition/user.ign"
Sep 12 00:12:58.161250 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 00:12:58.155247 ignition[803]: op(1): [started] loading QEMU firmware config module
Sep 12 00:12:58.155257 ignition[803]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 00:12:58.173781 ignition[803]: op(1): [finished] loading QEMU firmware config module
Sep 12 00:12:58.217258 ignition[803]: parsing config with SHA512: 009a79102bc184575f56bfaa43a888d8da1b197e369755a33b0f89e25b49f572a654afde2a39d5ccc42455095075c807c904d22fd8b7722ab127c19754f3e5e0
Sep 12 00:12:58.221244 unknown[803]: fetched base config from "system"
Sep 12 00:12:58.221258 unknown[803]: fetched user config from "qemu"
Sep 12 00:12:58.221571 ignition[803]: fetch-offline: fetch-offline passed
Sep 12 00:12:58.221625 ignition[803]: Ignition finished successfully
Sep 12 00:12:58.225376 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 00:12:58.225751 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 00:12:58.226875 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 00:12:58.268453 ignition[862]: Ignition 2.21.0
Sep 12 00:12:58.268477 ignition[862]: Stage: kargs
Sep 12 00:12:58.268603 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:12:58.268613 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:12:58.270004 ignition[862]: kargs: kargs passed
Sep 12 00:12:58.270215 ignition[862]: Ignition finished successfully
Sep 12 00:12:58.274810 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 00:12:58.275949 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 00:12:58.325513 ignition[870]: Ignition 2.21.0
Sep 12 00:12:58.325527 ignition[870]: Stage: disks
Sep 12 00:12:58.325679 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:12:58.325690 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:12:58.331186 ignition[870]: disks: disks passed
Sep 12 00:12:58.331268 ignition[870]: Ignition finished successfully
Sep 12 00:12:58.335450 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 00:12:58.336781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 00:12:58.338886 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 00:12:58.340355 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 00:12:58.342911 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 00:12:58.345098 systemd[1]: Reached target basic.target - Basic System.
Sep 12 00:12:58.347997 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 00:12:58.385860 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 00:12:58.449696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 00:12:58.451290 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 00:12:58.571079 kernel: EXT4-fs (vda9): mounted filesystem 2b0516a2-9b75-4ad7-aa6a-616021c6ba5f r/w with ordered data mode. Quota mode: none.
Sep 12 00:12:58.572148 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 00:12:58.572867 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 00:12:58.575577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 00:12:58.578201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 00:12:58.578592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 00:12:58.578640 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 00:12:58.578668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 00:12:58.593506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 00:12:58.595175 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 00:12:58.599091 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Sep 12 00:12:58.601592 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:12:58.601650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:12:58.605067 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:12:58.605097 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:12:58.607801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 00:12:58.639925 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 00:12:58.645268 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Sep 12 00:12:58.649816 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 00:12:58.654321 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 00:12:58.757230 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 00:12:58.760181 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 00:12:58.762033 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 00:12:58.784144 kernel: BTRFS info (device vda6): last unmount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:12:58.798668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 00:12:58.815772 ignition[1002]: INFO : Ignition 2.21.0
Sep 12 00:12:58.815772 ignition[1002]: INFO : Stage: mount
Sep 12 00:12:58.817716 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:12:58.817716 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:12:58.821733 ignition[1002]: INFO : mount: mount passed
Sep 12 00:12:58.822663 ignition[1002]: INFO : Ignition finished successfully
Sep 12 00:12:58.826181 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 00:12:58.828462 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 00:12:58.853960 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 00:12:58.857713 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 00:12:58.882070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 12 00:12:58.882145 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:12:58.884095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:12:58.887292 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:12:58.887310 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:12:58.889144 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 00:12:58.928872 ignition[1032]: INFO : Ignition 2.21.0
Sep 12 00:12:58.928872 ignition[1032]: INFO : Stage: files
Sep 12 00:12:58.930834 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:12:58.930834 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:12:58.933407 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 00:12:58.935744 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 00:12:58.935744 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 00:12:58.940027 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 00:12:58.941548 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 00:12:58.943252 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 12 00:12:58.944406 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 00:12:58.946721 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:12:58.948794 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 00:12:59.002293 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 00:12:59.538361 systemd-networkd[849]: eth0: Gained IPv6LL
Sep 12 00:13:00.285946 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:13:00.288961 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:13:00.303634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 00:13:00.625587 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 00:13:01.047866 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:13:01.047866 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 00:13:01.052768 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 00:13:01.055069 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:13:01.083853 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:13:01.091578 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:13:01.094076 ignition[1032]: INFO : files: files passed
Sep 12 00:13:01.094076 ignition[1032]: INFO : Ignition finished successfully
Sep 12 00:13:01.095751 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 00:13:01.097738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 00:13:01.101170 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 00:13:01.132739 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 00:13:01.132923 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 00:13:01.137292 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 00:13:01.141520 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 00:13:01.143344 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 00:13:01.145010 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 00:13:01.144673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 00:13:01.146562 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 00:13:01.149140 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 00:13:01.211722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 00:13:01.212892 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 00:13:01.216220 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 00:13:01.216344 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 00:13:01.219257 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 00:13:01.220278 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 00:13:01.253930 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 00:13:01.258709 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 00:13:01.284076 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 00:13:01.284291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 00:13:01.287833 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 12 00:13:01.290177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 00:13:01.290343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 00:13:01.293131 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 00:13:01.295499 systemd[1]: Stopped target basic.target - Basic System. Sep 12 00:13:01.297449 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 00:13:01.299317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 00:13:01.301514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 00:13:01.302615 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 00:13:01.302930 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 00:13:01.303419 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 00:13:01.303754 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 00:13:01.304076 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 00:13:01.304410 systemd[1]: Stopped target swap.target - Swaps. Sep 12 00:13:01.304670 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 00:13:01.304832 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 00:13:01.318251 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 00:13:01.319435 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 00:13:01.322069 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 00:13:01.324236 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 00:13:01.324392 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 00:13:01.324550 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 00:13:01.329761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 00:13:01.329941 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 00:13:01.332424 systemd[1]: Stopped target paths.target - Path Units. Sep 12 00:13:01.334553 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 00:13:01.338126 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 00:13:01.338310 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 00:13:01.341483 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 00:13:01.341799 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 00:13:01.341911 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 00:13:01.345648 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 00:13:01.345806 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 00:13:01.348212 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 00:13:01.348389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 00:13:01.352525 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 00:13:01.352675 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 00:13:01.356605 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 00:13:01.357720 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 00:13:01.360227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 00:13:01.360370 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 00:13:01.363218 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 00:13:01.363431 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 00:13:01.370898 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 00:13:01.385251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 00:13:01.405065 ignition[1088]: INFO : Ignition 2.21.0 Sep 12 00:13:01.405065 ignition[1088]: INFO : Stage: umount Sep 12 00:13:01.406999 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 00:13:01.406999 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 00:13:01.406999 ignition[1088]: INFO : umount: umount passed Sep 12 00:13:01.406999 ignition[1088]: INFO : Ignition finished successfully Sep 12 00:13:01.411033 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 00:13:01.411207 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 00:13:01.413271 systemd[1]: Stopped target network.target - Network. Sep 12 00:13:01.414912 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 00:13:01.414975 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 00:13:01.416659 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 00:13:01.416709 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 00:13:01.417659 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 00:13:01.417723 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 00:13:01.417965 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 00:13:01.418015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 00:13:01.422141 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 00:13:01.423253 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 00:13:01.434414 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 00:13:01.434631 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 12 00:13:01.439529 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 00:13:01.439839 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 00:13:01.439889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 00:13:01.444359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 00:13:01.444688 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 00:13:01.444816 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 00:13:01.450025 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 00:13:01.451546 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 00:13:01.454285 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 00:13:01.454351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 00:13:01.458134 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 00:13:01.459236 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 00:13:01.459310 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 00:13:01.461952 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 00:13:01.462017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 00:13:01.465866 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 00:13:01.465935 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 00:13:01.467005 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 00:13:01.469006 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 00:13:01.491943 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 12 00:13:01.492272 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 00:13:01.494203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 00:13:01.494271 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 00:13:01.494879 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 00:13:01.494928 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 00:13:01.495401 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 00:13:01.495472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 00:13:01.496096 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 00:13:01.496159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 00:13:01.503947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 00:13:01.504026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 00:13:01.509392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 00:13:01.511320 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 00:13:01.511447 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 00:13:01.514760 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 00:13:01.514817 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 00:13:01.518589 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 00:13:01.518648 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 00:13:01.522726 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 12 00:13:01.522792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 00:13:01.525839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 00:13:01.525947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:13:01.531801 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 00:13:01.531916 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 00:13:01.531982 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 00:13:01.532038 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 00:13:01.532125 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 00:13:01.532943 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 00:13:01.533125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 00:13:01.534214 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 00:13:01.534325 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 00:13:01.536659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 00:13:01.536807 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 00:13:01.542392 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 00:13:01.543136 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 00:13:01.543225 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 00:13:01.544701 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 00:13:01.562523 systemd[1]: Switching root. Sep 12 00:13:01.604255 systemd-journald[220]: Journal stopped Sep 12 00:13:02.998408 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Sep 12 00:13:02.998493 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 00:13:02.998521 kernel: SELinux: policy capability open_perms=1 Sep 12 00:13:02.998545 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 00:13:02.998561 kernel: SELinux: policy capability always_check_network=0 Sep 12 00:13:02.998577 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 00:13:02.998593 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 00:13:02.998615 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 00:13:02.998632 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 00:13:02.998647 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 00:13:02.998663 kernel: audit: type=1403 audit(1757635982.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 00:13:02.998691 systemd[1]: Successfully loaded SELinux policy in 61.820ms. Sep 12 00:13:02.998730 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.595ms. Sep 12 00:13:02.998749 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 00:13:02.998773 systemd[1]: Detected virtualization kvm. Sep 12 00:13:02.998790 systemd[1]: Detected architecture x86-64. Sep 12 00:13:02.998806 systemd[1]: Detected first boot. Sep 12 00:13:02.998828 systemd[1]: Initializing machine ID from VM UUID. Sep 12 00:13:02.998844 zram_generator::config[1133]: No configuration found. 
Sep 12 00:13:02.998869 kernel: Guest personality initialized and is inactive Sep 12 00:13:02.998884 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 00:13:02.998900 kernel: Initialized host personality Sep 12 00:13:02.998915 kernel: NET: Registered PF_VSOCK protocol family Sep 12 00:13:02.998939 systemd[1]: Populated /etc with preset unit settings. Sep 12 00:13:02.998956 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 00:13:02.998973 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 00:13:02.998989 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 00:13:02.999006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 00:13:02.999030 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 00:13:02.999066 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 00:13:02.999084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 00:13:02.999113 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 00:13:02.999146 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 00:13:02.999165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 00:13:02.999182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 00:13:02.999199 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 00:13:02.999215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 00:13:02.999243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 00:13:02.999260 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 12 00:13:02.999277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 00:13:02.999305 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 00:13:02.999323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 00:13:02.999351 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 00:13:02.999367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 00:13:02.999391 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 00:13:02.999408 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 00:13:02.999423 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 00:13:02.999438 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 00:13:02.999452 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 00:13:02.999467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 00:13:02.999482 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 00:13:02.999496 systemd[1]: Reached target slices.target - Slice Units. Sep 12 00:13:02.999510 systemd[1]: Reached target swap.target - Swaps. Sep 12 00:13:02.999527 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 00:13:02.999548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 00:13:02.999564 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 00:13:02.999580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 00:13:02.999596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 00:13:02.999613 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 00:13:02.999630 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 00:13:02.999647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 00:13:02.999663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 00:13:02.999679 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 00:13:02.999705 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:13:02.999722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 00:13:02.999738 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 00:13:02.999753 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 00:13:02.999769 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 00:13:02.999784 systemd[1]: Reached target machines.target - Containers. Sep 12 00:13:02.999798 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 00:13:02.999814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:13:02.999836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 00:13:02.999851 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 00:13:02.999865 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 00:13:02.999882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 00:13:02.999896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:13:02.999911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 12 00:13:02.999927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:13:02.999943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 00:13:02.999968 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 00:13:02.999985 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 00:13:03.000001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 00:13:03.000017 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 00:13:03.000035 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 00:13:03.000075 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 00:13:03.000095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 00:13:03.000111 kernel: loop: module loaded Sep 12 00:13:03.000127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 00:13:03.000154 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 00:13:03.000171 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 00:13:03.000190 kernel: fuse: init (API version 7.41) Sep 12 00:13:03.000209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 00:13:03.000256 systemd-journald[1197]: Collecting audit messages is disabled. Sep 12 00:13:03.000294 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 00:13:03.000312 systemd[1]: Stopped verity-setup.service. 
Sep 12 00:13:03.000345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:13:03.000370 systemd-journald[1197]: Journal started Sep 12 00:13:03.000406 systemd-journald[1197]: Runtime Journal (/run/log/journal/1ea95097a4984081b1a0e62acd00cd63) is 6M, max 48.6M, 42.5M free. Sep 12 00:13:02.698502 systemd[1]: Queued start job for default target multi-user.target. Sep 12 00:13:02.722488 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 00:13:02.722989 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 00:13:03.008722 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 00:13:03.009905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 00:13:03.011171 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 00:13:03.012543 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 00:13:03.013720 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 00:13:03.014981 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 00:13:03.016452 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 00:13:03.018091 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 00:13:03.019805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 00:13:03.021135 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 00:13:03.022951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:13:03.023308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 00:13:03.024876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:13:03.025158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 00:13:03.026860 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 00:13:03.027135 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 00:13:03.028973 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:13:03.029357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:13:03.030914 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 00:13:03.032494 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 00:13:03.034188 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 00:13:03.035870 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 00:13:03.046065 kernel: ACPI: bus type drm_connector registered Sep 12 00:13:03.047742 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 00:13:03.048677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 00:13:03.093503 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 00:13:03.096296 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 00:13:03.099230 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 00:13:03.100405 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 00:13:03.100452 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 00:13:03.102755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 00:13:03.107314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 00:13:03.108592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 00:13:03.113729 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 00:13:03.116262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 00:13:03.117572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 00:13:03.118656 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 00:13:03.119990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 00:13:03.121786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 00:13:03.124671 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 00:13:03.128123 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 00:13:03.132960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 00:13:03.133390 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 00:13:03.133776 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 00:13:03.145007 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 00:13:03.146475 systemd-journald[1197]: Time spent on flushing to /var/log/journal/1ea95097a4984081b1a0e62acd00cd63 is 36.767ms for 990 entries. Sep 12 00:13:03.146475 systemd-journald[1197]: System Journal (/var/log/journal/1ea95097a4984081b1a0e62acd00cd63) is 8M, max 195.6M, 187.6M free. Sep 12 00:13:03.203546 systemd-journald[1197]: Received client request to flush runtime journal. 
Sep 12 00:13:03.203625 kernel: loop0: detected capacity change from 0 to 111000 Sep 12 00:13:03.203666 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 00:13:03.146970 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 00:13:03.156217 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 00:13:03.182881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 00:13:03.202093 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Sep 12 00:13:03.202111 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Sep 12 00:13:03.205227 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 00:13:03.209688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 00:13:03.213366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 00:13:03.220436 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 00:13:03.222279 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 00:13:03.225564 kernel: loop1: detected capacity change from 0 to 224512 Sep 12 00:13:03.251507 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 00:13:03.271920 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 00:13:03.274874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 00:13:03.279072 kernel: loop3: detected capacity change from 0 to 111000 Sep 12 00:13:03.293093 kernel: loop4: detected capacity change from 0 to 224512 Sep 12 00:13:03.305105 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 00:13:03.310355 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 12 00:13:03.310385 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. 
Sep 12 00:13:03.317170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 00:13:03.318141 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 00:13:03.318762 (sd-merge)[1277]: Merged extensions into '/usr'. Sep 12 00:13:03.323566 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 00:13:03.323586 systemd[1]: Reloading... Sep 12 00:13:03.419093 zram_generator::config[1308]: No configuration found. Sep 12 00:13:03.620001 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 00:13:03.636344 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 00:13:03.636700 systemd[1]: Reloading finished in 312 ms. Sep 12 00:13:03.671632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 00:13:03.696278 systemd[1]: Starting ensure-sysext.service... Sep 12 00:13:03.706279 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 00:13:03.777834 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 00:13:03.777928 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 00:13:03.779451 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 00:13:03.780036 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 00:13:03.781656 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 00:13:03.782161 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. 
Sep 12 00:13:03.782314 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Sep 12 00:13:03.788877 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)...
Sep 12 00:13:03.788911 systemd[1]: Reloading...
Sep 12 00:13:03.790735 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 00:13:03.790750 systemd-tmpfiles[1342]: Skipping /boot
Sep 12 00:13:03.809389 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 00:13:03.809408 systemd-tmpfiles[1342]: Skipping /boot
Sep 12 00:13:03.866092 zram_generator::config[1373]: No configuration found.
Sep 12 00:13:04.098708 systemd[1]: Reloading finished in 309 ms.
Sep 12 00:13:04.123836 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 00:13:04.125787 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 00:13:04.148601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:13:04.164551 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 00:13:04.167623 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 00:13:04.170274 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 00:13:04.177224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 00:13:04.180933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:13:04.185325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 00:13:04.189992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.190204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 00:13:04.194246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 00:13:04.212845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 00:13:04.240741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 00:13:04.242511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 00:13:04.242672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 00:13:04.246541 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 00:13:04.269708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.271790 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 00:13:04.273961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 00:13:04.279443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 00:13:04.289580 systemd-udevd[1414]: Using default interface naming scheme 'v255'.
Sep 12 00:13:04.289711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 00:13:04.291170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 00:13:04.293637 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 00:13:04.294608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 00:13:04.296810 augenrules[1439]: No rules
Sep 12 00:13:04.299433 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 00:13:04.299759 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 00:13:04.307487 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 00:13:04.316259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.317416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 00:13:04.319454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 00:13:04.324366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 00:13:04.327903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 00:13:04.329352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 00:13:04.329506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 00:13:04.333130 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 00:13:04.334408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 00:13:04.334519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.336081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 00:13:04.340488 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 00:13:04.347697 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 00:13:04.351259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 00:13:04.351508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 00:13:04.353228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 00:13:04.353471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 00:13:04.355223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 00:13:04.355469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 00:13:04.365533 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 00:13:04.389238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.392570 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 00:13:04.393901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 00:13:04.396257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 00:13:04.398596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 00:13:04.400806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 00:13:04.405159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 00:13:04.407235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 00:13:04.407423 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 00:13:04.415196 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 00:13:04.416507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 00:13:04.416643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:13:04.424497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 00:13:04.424779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 00:13:04.432028 systemd[1]: Finished ensure-sysext.service.
Sep 12 00:13:04.433531 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 00:13:04.433836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 00:13:04.436098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 00:13:04.436398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 00:13:04.438216 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 00:13:04.438466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 00:13:04.443861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 00:13:04.443988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 00:13:04.446771 augenrules[1490]: /sbin/augenrules: No change
Sep 12 00:13:04.448083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 00:13:04.461974 augenrules[1520]: No rules
Sep 12 00:13:04.464005 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 00:13:04.469326 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 00:13:04.508700 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 00:13:04.559835 systemd-resolved[1413]: Positive Trust Anchors:
Sep 12 00:13:04.560218 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 00:13:04.560433 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 00:13:04.567034 systemd-resolved[1413]: Defaulting to hostname 'linux'.
Sep 12 00:13:04.568638 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 00:13:04.571356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 00:13:04.577740 systemd-networkd[1496]: lo: Link UP
Sep 12 00:13:04.577758 systemd-networkd[1496]: lo: Gained carrier
Sep 12 00:13:04.582873 systemd-networkd[1496]: Enumeration completed
Sep 12 00:13:04.582978 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 00:13:04.584511 systemd[1]: Reached target network.target - Network.
Sep 12 00:13:04.588236 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 00:13:04.592367 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 00:13:04.596249 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:13:04.596264 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 00:13:04.597029 systemd-networkd[1496]: eth0: Link UP
Sep 12 00:13:04.597332 systemd-networkd[1496]: eth0: Gained carrier
Sep 12 00:13:04.597366 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:13:04.608018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 00:13:04.608125 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 00:13:04.624177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 00:13:04.635064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 12 00:13:04.635124 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 00:13:04.636993 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 00:13:04.641880 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 00:13:04.642066 kernel: ACPI: button: Power Button [PWRF]
Sep 12 00:13:05.292652 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 00:13:05.292723 systemd-timesyncd[1513]: Initial clock synchronization to Fri 2025-09-12 00:13:05.292537 UTC.
Sep 12 00:13:05.294756 systemd-resolved[1413]: Clock change detected. Flushing caches.
Sep 12 00:13:05.295402 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 00:13:05.296618 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 00:13:05.299553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 00:13:05.301866 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 00:13:05.303179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 00:13:05.304531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 00:13:05.304567 systemd[1]: Reached target paths.target - Path Units.
Sep 12 00:13:05.305562 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 00:13:05.307059 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 00:13:05.308331 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 00:13:05.309809 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 00:13:05.312031 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 00:13:05.315112 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 00:13:05.320028 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 00:13:05.321919 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 00:13:05.323284 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 00:13:05.327917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 00:13:05.329647 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 00:13:05.331861 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 00:13:05.334414 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 00:13:05.334672 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 00:13:05.338384 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 00:13:05.338229 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 00:13:05.339803 systemd[1]: Reached target basic.target - Basic System.
Sep 12 00:13:05.340921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 00:13:05.340950 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 00:13:05.344443 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 00:13:05.347638 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 00:13:05.349797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 00:13:05.352041 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 00:13:05.354522 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 00:13:05.355775 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 00:13:05.357019 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 00:13:05.376924 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 00:13:05.390550 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 00:13:05.392952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 00:13:05.399667 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 00:13:05.400666 jq[1552]: false
Sep 12 00:13:05.406634 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 00:13:05.408706 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 00:13:05.409280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 00:13:05.410301 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 00:13:05.412847 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 00:13:05.416250 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 00:13:05.416545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 00:13:05.432192 jq[1571]: true
Sep 12 00:13:05.438326 update_engine[1569]: I20250912 00:13:05.438227 1569 main.cc:92] Flatcar Update Engine starting
Sep 12 00:13:05.447142 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 00:13:05.450879 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 00:13:05.453779 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 00:13:05.495893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 00:13:05.496194 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 00:13:05.518174 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 00:13:05.522401 tar[1574]: linux-amd64/LICENSE
Sep 12 00:13:05.522401 tar[1574]: linux-amd64/helm
Sep 12 00:13:05.917863 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Sep 12 00:13:05.916040 oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Sep 12 00:13:05.914834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:13:05.923252 extend-filesystems[1553]: Found /dev/vda6
Sep 12 00:13:05.934207 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting
Sep 12 00:13:05.935376 oslogin_cache_refresh[1554]: Failure getting users, quitting
Sep 12 00:13:05.936011 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 00:13:05.936011 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache
Sep 12 00:13:05.935446 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 00:13:05.935519 oslogin_cache_refresh[1554]: Refreshing group entry cache
Sep 12 00:13:05.941602 extend-filesystems[1553]: Found /dev/vda9
Sep 12 00:13:05.942493 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting
Sep 12 00:13:05.942493 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 00:13:05.942297 oslogin_cache_refresh[1554]: Failure getting groups, quitting
Sep 12 00:13:05.942309 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 00:13:05.947086 extend-filesystems[1553]: Checking size of /dev/vda9
Sep 12 00:13:05.948454 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 00:13:05.948745 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 00:13:05.972170 kernel: kvm_amd: TSC scaling supported
Sep 12 00:13:05.972268 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 00:13:05.972291 kernel: kvm_amd: Nested Paging enabled
Sep 12 00:13:05.972304 kernel: kvm_amd: LBR virtualization supported
Sep 12 00:13:05.973456 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 00:13:05.973500 kernel: kvm_amd: Virtual GIF supported
Sep 12 00:13:06.020404 extend-filesystems[1553]: Resized partition /dev/vda9
Sep 12 00:13:06.022335 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 00:13:06.022382 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 00:13:06.025462 extend-filesystems[1612]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 00:13:06.029634 systemd-logind[1568]: New seat seat0.
Sep 12 00:13:06.031777 dbus-daemon[1549]: [system] SELinux support is enabled
Sep 12 00:13:06.032113 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 00:13:06.037021 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 00:13:06.038448 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 00:13:06.039416 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 00:13:06.039436 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 00:13:06.040900 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 00:13:06.045494 jq[1584]: true
Sep 12 00:13:06.060409 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 00:13:06.063307 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 00:13:06.066696 update_engine[1569]: I20250912 00:13:06.066602 1569 update_check_scheduler.cc:74] Next update check in 7m27s
Sep 12 00:13:06.072315 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 00:13:06.185958 kernel: EDAC MC: Ver: 3.0.0
Sep 12 00:13:06.194034 containerd[1586]: time="2025-09-12T00:13:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 00:13:06.194034 containerd[1586]: time="2025-09-12T00:13:06.193274285Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 00:13:06.199387 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 00:13:06.209275 containerd[1586]: time="2025-09-12T00:13:06.208683238Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.13µs"
Sep 12 00:13:06.209275 containerd[1586]: time="2025-09-12T00:13:06.208732661Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 00:13:06.209275 containerd[1586]: time="2025-09-12T00:13:06.208756967Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 00:13:06.231799 containerd[1586]: time="2025-09-12T00:13:06.231272840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 00:13:06.231799 containerd[1586]: time="2025-09-12T00:13:06.231564577Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 00:13:06.231799 containerd[1586]: time="2025-09-12T00:13:06.231677088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 00:13:06.231799 containerd[1586]: time="2025-09-12T00:13:06.231856485Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 00:13:06.231799 containerd[1586]: time="2025-09-12T00:13:06.231870912Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 00:13:06.232590 containerd[1586]: time="2025-09-12T00:13:06.232517134Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 00:13:06.232590 containerd[1586]: time="2025-09-12T00:13:06.232563090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 00:13:06.232590 containerd[1586]: time="2025-09-12T00:13:06.232583899Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 00:13:06.232590 containerd[1586]: time="2025-09-12T00:13:06.232597404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 00:13:06.233307 containerd[1586]: time="2025-09-12T00:13:06.233270627Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 00:13:06.233742 containerd[1586]: time="2025-09-12T00:13:06.233702407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 00:13:06.233832 containerd[1586]: time="2025-09-12T00:13:06.233786465Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 00:13:06.233832 containerd[1586]: time="2025-09-12T00:13:06.233811482Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 00:13:06.233916 containerd[1586]: time="2025-09-12T00:13:06.233903204Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 00:13:06.236244 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 00:13:06.236244 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 00:13:06.236244 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 00:13:06.293216 extend-filesystems[1553]: Resized filesystem in /dev/vda9
Sep 12 00:13:06.294445 tar[1574]: linux-amd64/README.md
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.236904884Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.237088068Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249663395Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249765988Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249795944Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249833514Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249871365Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249909477Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249928392Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249962226Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.249998804Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.250035904Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.250057034Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.250106517Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 00:13:06.294517 containerd[1586]: time="2025-09-12T00:13:06.250299138Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 00:13:06.294891 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 00:13:06.295085 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 00:13:06.241566 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250377886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250403474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250433309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250446454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250456192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250470098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250491749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250538647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250558645Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250605543Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250810998Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250832298Z" level=info msg="Start snapshots syncer"
Sep 12 00:13:06.296553 containerd[1586]: time="2025-09-12T00:13:06.250875840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 00:13:06.242396 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 00:13:06.297279 containerd[1586]: time="2025-09-12T00:13:06.251247807Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/co
ntainerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 00:13:06.297279 containerd[1586]: time="2025-09-12T00:13:06.251312398Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 00:13:06.279003 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251443454Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251559452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251586262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251597633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251620116Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251637348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251647557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251658638Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251680228Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251690808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251701418Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251736744Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251750480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:13:06.297881 containerd[1586]: time="2025-09-12T00:13:06.251758635Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251768053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251775627Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251790024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251821463Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251846751Z" level=info msg="runtime interface created" Sep 12 00:13:06.298264 containerd[1586]: 
time="2025-09-12T00:13:06.251853203Z" level=info msg="created NRI interface" Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251870716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.251882678Z" level=info msg="Connect containerd service" Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.252252882Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 00:13:06.298264 containerd[1586]: time="2025-09-12T00:13:06.254498204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 00:13:06.362352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 00:13:06.366728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:13:06.368929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 00:13:06.382963 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 00:13:06.385014 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 00:13:06.389868 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 00:13:06.492153 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 00:13:06.492582 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 00:13:06.496044 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 12 00:13:06.510322 containerd[1586]: time="2025-09-12T00:13:06.510226507Z" level=info msg="Start subscribing containerd event" Sep 12 00:13:06.510511 containerd[1586]: time="2025-09-12T00:13:06.510413167Z" level=info msg="Start recovering state" Sep 12 00:13:06.510776 containerd[1586]: time="2025-09-12T00:13:06.510746703Z" level=info msg="Start event monitor" Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510782881Z" level=info msg="Start cni network conf syncer for default" Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510804061Z" level=info msg="Start streaming server" Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510821423Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510831612Z" level=info msg="runtime interface starting up..." Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510840048Z" level=info msg="starting plugins..." Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510836211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510859855Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 00:13:06.511417 containerd[1586]: time="2025-09-12T00:13:06.510940777Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 00:13:06.511266 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 00:13:06.511880 containerd[1586]: time="2025-09-12T00:13:06.511820407Z" level=info msg="containerd successfully booted in 0.333508s" Sep 12 00:13:06.522138 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 00:13:06.525614 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 00:13:06.528538 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Sep 12 00:13:06.530123 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 00:13:06.715914 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 12 00:13:06.719923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 00:13:06.721755 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 00:13:06.724529 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 00:13:06.727508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:13:06.730241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 00:13:06.761336 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 00:13:06.761707 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 00:13:06.776700 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 00:13:06.779912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 00:13:08.018297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:13:08.020517 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 00:13:08.022238 systemd[1]: Startup finished in 3.515s (kernel) + 7.480s (initrd) + 5.288s (userspace) = 16.284s. 
Sep 12 00:13:08.054800 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:13:08.571290 kubelet[1702]: E0912 00:13:08.571211 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:13:08.575164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:13:08.575392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:13:08.575804 systemd[1]: kubelet.service: Consumed 1.586s CPU time, 264.5M memory peak. Sep 12 00:13:09.711958 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 00:13:09.713307 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Sep 12 00:13:09.785802 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:09.788038 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:09.796322 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 00:13:09.797733 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 00:13:09.805241 systemd-logind[1568]: New session 1 of user core. Sep 12 00:13:09.821887 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 00:13:09.825288 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 00:13:09.851783 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 00:13:09.854853 systemd-logind[1568]: New session c1 of user core. Sep 12 00:13:10.013555 systemd[1720]: Queued start job for default target default.target. Sep 12 00:13:10.024702 systemd[1720]: Created slice app.slice - User Application Slice. Sep 12 00:13:10.024728 systemd[1720]: Reached target paths.target - Paths. Sep 12 00:13:10.024775 systemd[1720]: Reached target timers.target - Timers. Sep 12 00:13:10.026449 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 00:13:10.039259 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 00:13:10.039458 systemd[1720]: Reached target sockets.target - Sockets. Sep 12 00:13:10.039526 systemd[1720]: Reached target basic.target - Basic System. Sep 12 00:13:10.039583 systemd[1720]: Reached target default.target - Main User Target. Sep 12 00:13:10.039631 systemd[1720]: Startup finished in 176ms. Sep 12 00:13:10.039893 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 00:13:10.041792 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 00:13:10.109496 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192). Sep 12 00:13:10.169559 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:10.171058 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:10.175489 systemd-logind[1568]: New session 2 of user core. Sep 12 00:13:10.185504 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 00:13:10.241110 sshd[1734]: Connection closed by 10.0.0.1 port 57192 Sep 12 00:13:10.241537 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:10.252353 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:57192.service: Deactivated successfully. Sep 12 00:13:10.254936 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 00:13:10.255932 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Sep 12 00:13:10.260088 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:57204.service - OpenSSH per-connection server daemon (10.0.0.1:57204). Sep 12 00:13:10.260917 systemd-logind[1568]: Removed session 2. Sep 12 00:13:10.316941 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 57204 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:10.318955 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:10.323968 systemd-logind[1568]: New session 3 of user core. Sep 12 00:13:10.334557 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 00:13:10.386032 sshd[1743]: Connection closed by 10.0.0.1 port 57204 Sep 12 00:13:10.386488 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:10.405286 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:57204.service: Deactivated successfully. Sep 12 00:13:10.407175 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 00:13:10.407935 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Sep 12 00:13:10.410488 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:57216.service - OpenSSH per-connection server daemon (10.0.0.1:57216). Sep 12 00:13:10.411013 systemd-logind[1568]: Removed session 3. 
Sep 12 00:13:10.451810 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 57216 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:10.453309 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:10.458326 systemd-logind[1568]: New session 4 of user core. Sep 12 00:13:10.473500 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 00:13:10.528804 sshd[1752]: Connection closed by 10.0.0.1 port 57216 Sep 12 00:13:10.529200 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:10.544750 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:57216.service: Deactivated successfully. Sep 12 00:13:10.546491 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 00:13:10.547350 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Sep 12 00:13:10.549957 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:57218.service - OpenSSH per-connection server daemon (10.0.0.1:57218). Sep 12 00:13:10.550738 systemd-logind[1568]: Removed session 4. Sep 12 00:13:10.600663 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:10.602249 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:10.607514 systemd-logind[1568]: New session 5 of user core. Sep 12 00:13:10.621740 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 00:13:10.683707 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 00:13:10.684123 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:13:10.703990 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 12 00:13:10.706098 sshd[1761]: Connection closed by 10.0.0.1 port 57218 Sep 12 00:13:10.706735 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:10.726598 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:57218.service: Deactivated successfully. Sep 12 00:13:10.728434 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 00:13:10.729385 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Sep 12 00:13:10.732555 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224). Sep 12 00:13:10.733162 systemd-logind[1568]: Removed session 5. Sep 12 00:13:10.800859 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:10.802255 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:10.806717 systemd-logind[1568]: New session 6 of user core. Sep 12 00:13:10.817501 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 00:13:10.872433 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 00:13:10.872784 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:13:11.098810 sudo[1773]: pam_unix(sudo:session): session closed for user root Sep 12 00:13:11.106884 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 00:13:11.107206 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:13:11.118619 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:13:11.163281 augenrules[1795]: No rules Sep 12 00:13:11.165298 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:13:11.165620 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:13:11.167052 sudo[1772]: pam_unix(sudo:session): session closed for user root Sep 12 00:13:11.168997 sshd[1771]: Connection closed by 10.0.0.1 port 57224 Sep 12 00:13:11.169340 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:11.182214 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:57224.service: Deactivated successfully. Sep 12 00:13:11.184258 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 00:13:11.185216 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Sep 12 00:13:11.188392 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:57240.service - OpenSSH per-connection server daemon (10.0.0.1:57240). Sep 12 00:13:11.189078 systemd-logind[1568]: Removed session 6. Sep 12 00:13:11.244679 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 57240 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:13:11.246253 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:13:11.251800 systemd-logind[1568]: New session 7 of user core. 
Sep 12 00:13:11.261582 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 00:13:11.317344 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 00:13:11.317782 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:13:11.644869 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 00:13:11.666774 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 00:13:11.931464 dockerd[1828]: time="2025-09-12T00:13:11.931250749Z" level=info msg="Starting up" Sep 12 00:13:11.932383 dockerd[1828]: time="2025-09-12T00:13:11.932213064Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 00:13:11.946577 dockerd[1828]: time="2025-09-12T00:13:11.946504962Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 00:13:12.412883 dockerd[1828]: time="2025-09-12T00:13:12.412666508Z" level=info msg="Loading containers: start." Sep 12 00:13:12.425450 kernel: Initializing XFRM netlink socket Sep 12 00:13:12.971701 systemd-networkd[1496]: docker0: Link UP Sep 12 00:13:12.978394 dockerd[1828]: time="2025-09-12T00:13:12.978320833Z" level=info msg="Loading containers: done." 
Sep 12 00:13:12.994537 dockerd[1828]: time="2025-09-12T00:13:12.994477138Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 00:13:12.994728 dockerd[1828]: time="2025-09-12T00:13:12.994577266Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 00:13:12.994728 dockerd[1828]: time="2025-09-12T00:13:12.994657116Z" level=info msg="Initializing buildkit" Sep 12 00:13:13.027417 dockerd[1828]: time="2025-09-12T00:13:13.027337154Z" level=info msg="Completed buildkit initialization" Sep 12 00:13:13.033037 dockerd[1828]: time="2025-09-12T00:13:13.033002461Z" level=info msg="Daemon has completed initialization" Sep 12 00:13:13.033113 dockerd[1828]: time="2025-09-12T00:13:13.033059548Z" level=info msg="API listen on /run/docker.sock" Sep 12 00:13:13.033307 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 00:13:14.999582 containerd[1586]: time="2025-09-12T00:13:14.999435657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 00:13:15.655996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410355053.mount: Deactivated successfully. 
Sep 12 00:13:16.720628 containerd[1586]: time="2025-09-12T00:13:16.720533585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:16.721262 containerd[1586]: time="2025-09-12T00:13:16.721200987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 00:13:16.722602 containerd[1586]: time="2025-09-12T00:13:16.722572319Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:16.725709 containerd[1586]: time="2025-09-12T00:13:16.725641245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:16.726810 containerd[1586]: time="2025-09-12T00:13:16.726766817Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.727228858s" Sep 12 00:13:16.726863 containerd[1586]: time="2025-09-12T00:13:16.726814095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 00:13:16.727610 containerd[1586]: time="2025-09-12T00:13:16.727546830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 00:13:18.614209 containerd[1586]: time="2025-09-12T00:13:18.614095295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:18.615387 containerd[1586]: time="2025-09-12T00:13:18.615332466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 00:13:18.617070 containerd[1586]: time="2025-09-12T00:13:18.617024941Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:18.620528 containerd[1586]: time="2025-09-12T00:13:18.620444364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:18.621370 containerd[1586]: time="2025-09-12T00:13:18.621300360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.893702204s" Sep 12 00:13:18.621370 containerd[1586]: time="2025-09-12T00:13:18.621339844Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 00:13:18.622218 containerd[1586]: time="2025-09-12T00:13:18.621853227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 00:13:18.826108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 00:13:18.828734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:13:19.253278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 00:13:19.275719 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:13:19.431195 kubelet[2119]: E0912 00:13:19.431110 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:13:19.437752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:13:19.437954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:13:19.438465 systemd[1]: kubelet.service: Consumed 314ms CPU time, 110.6M memory peak. Sep 12 00:13:21.098307 containerd[1586]: time="2025-09-12T00:13:21.098217489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:21.099226 containerd[1586]: time="2025-09-12T00:13:21.099106777Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 00:13:21.102066 containerd[1586]: time="2025-09-12T00:13:21.101792655Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:21.104910 containerd[1586]: time="2025-09-12T00:13:21.104851392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:21.105818 containerd[1586]: time="2025-09-12T00:13:21.105781156Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id 
\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.483802003s"
Sep 12 00:13:21.105818 containerd[1586]: time="2025-09-12T00:13:21.105812374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 00:13:21.106509 containerd[1586]: time="2025-09-12T00:13:21.106464438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 00:13:22.600136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125017052.mount: Deactivated successfully.
Sep 12 00:13:23.392641 containerd[1586]: time="2025-09-12T00:13:23.392557785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:23.477009 containerd[1586]: time="2025-09-12T00:13:23.476926216Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 00:13:23.521315 containerd[1586]: time="2025-09-12T00:13:23.521237842Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:23.527628 containerd[1586]: time="2025-09-12T00:13:23.527555082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:23.528197 containerd[1586]: time="2025-09-12T00:13:23.528148645Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.421630176s"
Sep 12 00:13:23.528197 containerd[1586]: time="2025-09-12T00:13:23.528190804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 00:13:23.528720 containerd[1586]: time="2025-09-12T00:13:23.528667578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 00:13:24.091579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526166218.mount: Deactivated successfully.
Sep 12 00:13:26.062181 containerd[1586]: time="2025-09-12T00:13:26.062097158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:26.063126 containerd[1586]: time="2025-09-12T00:13:26.063061948Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 00:13:26.064321 containerd[1586]: time="2025-09-12T00:13:26.064273290Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:26.067466 containerd[1586]: time="2025-09-12T00:13:26.067423038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:26.068876 containerd[1586]: time="2025-09-12T00:13:26.068809699Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.540102786s"
Sep 12 00:13:26.068876 containerd[1586]: time="2025-09-12T00:13:26.068863359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 00:13:26.070055 containerd[1586]: time="2025-09-12T00:13:26.069794847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 00:13:26.636726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531141958.mount: Deactivated successfully.
Sep 12 00:13:26.643650 containerd[1586]: time="2025-09-12T00:13:26.643600963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 00:13:26.644450 containerd[1586]: time="2025-09-12T00:13:26.644386366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 00:13:26.645792 containerd[1586]: time="2025-09-12T00:13:26.645709037Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 00:13:26.648104 containerd[1586]: time="2025-09-12T00:13:26.648069404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 00:13:26.648844 containerd[1586]: time="2025-09-12T00:13:26.648809222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.974601ms"
Sep 12 00:13:26.648889 containerd[1586]: time="2025-09-12T00:13:26.648844318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 00:13:26.649389 containerd[1586]: time="2025-09-12T00:13:26.649345408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 00:13:27.410003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928722344.mount: Deactivated successfully.
Sep 12 00:13:29.688773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 00:13:29.691186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 00:13:29.973659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:29.992725 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 00:13:30.391997 kubelet[2255]: E0912 00:13:30.391804 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 00:13:30.398194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 00:13:30.398908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 00:13:30.399541 systemd[1]: kubelet.service: Consumed 390ms CPU time, 110.1M memory peak.
Sep 12 00:13:31.160004 containerd[1586]: time="2025-09-12T00:13:31.159899819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:31.161233 containerd[1586]: time="2025-09-12T00:13:31.161165533Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 00:13:31.162559 containerd[1586]: time="2025-09-12T00:13:31.162499675Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:31.166018 containerd[1586]: time="2025-09-12T00:13:31.165981386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:13:31.167334 containerd[1586]: time="2025-09-12T00:13:31.167271466Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.517881444s"
Sep 12 00:13:31.167334 containerd[1586]: time="2025-09-12T00:13:31.167330356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 00:13:34.094265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:34.094442 systemd[1]: kubelet.service: Consumed 390ms CPU time, 110.1M memory peak.
Sep 12 00:13:34.096673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 00:13:34.121077 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)...
Sep 12 00:13:34.121110 systemd[1]: Reloading...
Sep 12 00:13:34.215395 zram_generator::config[2344]: No configuration found.
Sep 12 00:13:34.927039 systemd[1]: Reloading finished in 805 ms.
Sep 12 00:13:35.001266 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 00:13:35.001408 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 00:13:35.001754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:35.001822 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.3M memory peak.
Sep 12 00:13:35.003673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 00:13:35.178217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:35.188716 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 00:13:35.252640 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 00:13:35.252640 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 00:13:35.252640 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 00:13:35.253187 kubelet[2386]: I0912 00:13:35.252826 2386 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 00:13:35.555139 kubelet[2386]: I0912 00:13:35.554981 2386 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 00:13:35.555139 kubelet[2386]: I0912 00:13:35.555018 2386 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 00:13:35.555450 kubelet[2386]: I0912 00:13:35.555388 2386 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 00:13:35.585702 kubelet[2386]: E0912 00:13:35.585638 2386 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:35.590863 kubelet[2386]: I0912 00:13:35.590820 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 00:13:35.601858 kubelet[2386]: I0912 00:13:35.601811 2386 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 00:13:35.608970 kubelet[2386]: I0912 00:13:35.608924 2386 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 00:13:35.612320 kubelet[2386]: I0912 00:13:35.612251 2386 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 00:13:35.612593 kubelet[2386]: I0912 00:13:35.612305 2386 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 00:13:35.612738 kubelet[2386]: I0912 00:13:35.612600 2386 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 00:13:35.612738 kubelet[2386]: I0912 00:13:35.612611 2386 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 00:13:35.612834 kubelet[2386]: I0912 00:13:35.612813 2386 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 00:13:35.617240 kubelet[2386]: I0912 00:13:35.617178 2386 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 00:13:35.617283 kubelet[2386]: I0912 00:13:35.617256 2386 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 00:13:35.617554 kubelet[2386]: I0912 00:13:35.617310 2386 kubelet.go:352] "Adding apiserver pod source"
Sep 12 00:13:35.617554 kubelet[2386]: I0912 00:13:35.617346 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 00:13:35.621086 kubelet[2386]: I0912 00:13:35.621033 2386 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 00:13:35.621373 kubelet[2386]: W0912 00:13:35.621305 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:35.621458 kubelet[2386]: E0912 00:13:35.621402 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:35.621599 kubelet[2386]: I0912 00:13:35.621564 2386 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 00:13:35.621694 kubelet[2386]: W0912 00:13:35.621657 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:35.621736 kubelet[2386]: E0912 00:13:35.621697 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:35.622807 kubelet[2386]: W0912 00:13:35.622749 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 00:13:35.625308 kubelet[2386]: I0912 00:13:35.625270 2386 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 00:13:35.625384 kubelet[2386]: I0912 00:13:35.625327 2386 server.go:1287] "Started kubelet"
Sep 12 00:13:35.627285 kubelet[2386]: I0912 00:13:35.626683 2386 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 00:13:35.629497 kubelet[2386]: I0912 00:13:35.627802 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 00:13:35.629497 kubelet[2386]: I0912 00:13:35.628297 2386 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 00:13:35.629497 kubelet[2386]: I0912 00:13:35.629177 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 00:13:35.630327 kubelet[2386]: I0912 00:13:35.630297 2386 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 00:13:35.631320 kubelet[2386]: E0912 00:13:35.631267 2386 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 00:13:35.631580 kubelet[2386]: I0912 00:13:35.631558 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 00:13:35.636391 kubelet[2386]: E0912 00:13:35.636307 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:35.636603 kubelet[2386]: I0912 00:13:35.636570 2386 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 00:13:35.638037 kubelet[2386]: W0912 00:13:35.637909 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:35.638037 kubelet[2386]: E0912 00:13:35.637976 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:35.638348 kubelet[2386]: E0912 00:13:35.638307 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms"
Sep 12 00:13:35.638661 kubelet[2386]: I0912 00:13:35.638632 2386 factory.go:221] Registration of the systemd container factory successfully
Sep 12 00:13:35.638754 kubelet[2386]: I0912 00:13:35.638732 2386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 00:13:35.640395 kubelet[2386]: I0912 00:13:35.639572 2386 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 00:13:35.641971 kubelet[2386]: I0912 00:13:35.641928 2386 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 00:13:35.643658 kubelet[2386]: E0912 00:13:35.634824 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186460a4d3c74d40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 00:13:35.62529312 +0000 UTC m=+0.432603752,LastTimestamp:2025-09-12 00:13:35.62529312 +0000 UTC m=+0.432603752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 00:13:35.646668 kubelet[2386]: I0912 00:13:35.646642 2386 factory.go:221] Registration of the containerd container factory successfully
Sep 12 00:13:35.658945 kubelet[2386]: I0912 00:13:35.658860 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 00:13:35.660961 kubelet[2386]: I0912 00:13:35.660555 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 00:13:35.660961 kubelet[2386]: I0912 00:13:35.660601 2386 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 00:13:35.660961 kubelet[2386]: I0912 00:13:35.660631 2386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 00:13:35.660961 kubelet[2386]: I0912 00:13:35.660641 2386 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 00:13:35.660961 kubelet[2386]: E0912 00:13:35.660713 2386 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 00:13:35.663234 kubelet[2386]: W0912 00:13:35.663197 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:35.663311 kubelet[2386]: E0912 00:13:35.663237 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:35.667308 kubelet[2386]: I0912 00:13:35.667280 2386 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 00:13:35.667308 kubelet[2386]: I0912 00:13:35.667300 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 00:13:35.667533 kubelet[2386]: I0912 00:13:35.667387 2386 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 00:13:35.737798 kubelet[2386]: E0912 00:13:35.737703 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:35.761178 kubelet[2386]: E0912 00:13:35.761050 2386 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 00:13:35.838705 kubelet[2386]: E0912 00:13:35.838562 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:35.839109 kubelet[2386]: E0912 00:13:35.839058 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms"
Sep 12 00:13:35.939095 kubelet[2386]: E0912 00:13:35.939035 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:35.961657 kubelet[2386]: E0912 00:13:35.961591 2386 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 00:13:36.039319 kubelet[2386]: E0912 00:13:36.039218 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.139961 kubelet[2386]: E0912 00:13:36.139804 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.240097 kubelet[2386]: E0912 00:13:36.240046 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.240265 kubelet[2386]: E0912 00:13:36.240149 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms"
Sep 12 00:13:36.340911 kubelet[2386]: E0912 00:13:36.340849 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.362233 kubelet[2386]: E0912 00:13:36.362191 2386 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 00:13:36.441100 kubelet[2386]: E0912 00:13:36.440943 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.542021 kubelet[2386]: E0912 00:13:36.541945 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.642975 kubelet[2386]: E0912 00:13:36.642908 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.743565 kubelet[2386]: E0912 00:13:36.743396 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.773264 kubelet[2386]: W0912 00:13:36.773205 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:36.773264 kubelet[2386]: E0912 00:13:36.773254 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:36.782964 kubelet[2386]: W0912 00:13:36.782914 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:36.783034 kubelet[2386]: E0912 00:13:36.782957 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:36.843890 kubelet[2386]: E0912 00:13:36.843809 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:36.878893 kubelet[2386]: W0912 00:13:36.878790 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:36.878893 kubelet[2386]: E0912 00:13:36.878881 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:36.944968 kubelet[2386]: E0912 00:13:36.944892 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.041216 kubelet[2386]: E0912 00:13:37.041060 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s"
Sep 12 00:13:37.045164 kubelet[2386]: E0912 00:13:37.045124 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.124694 kubelet[2386]: I0912 00:13:37.124597 2386 policy_none.go:49] "None policy: Start"
Sep 12 00:13:37.124694 kubelet[2386]: I0912 00:13:37.124646 2386 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 00:13:37.124694 kubelet[2386]: I0912 00:13:37.124666 2386 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 00:13:37.145651 kubelet[2386]: E0912 00:13:37.145589 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.163232 kubelet[2386]: E0912 00:13:37.163145 2386 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 00:13:37.188253 kubelet[2386]: W0912 00:13:37.188164 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:37.188253 kubelet[2386]: E0912 00:13:37.188250 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:37.246118 kubelet[2386]: E0912 00:13:37.246067 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.347264 kubelet[2386]: E0912 00:13:37.347098 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.422652 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 00:13:37.435790 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 00:13:37.440014 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 00:13:37.448069 kubelet[2386]: E0912 00:13:37.448033 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:37.450773 kubelet[2386]: I0912 00:13:37.450615 2386 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 00:13:37.451034 kubelet[2386]: I0912 00:13:37.451002 2386 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 00:13:37.451116 kubelet[2386]: I0912 00:13:37.451066 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 00:13:37.451584 kubelet[2386]: I0912 00:13:37.451542 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 00:13:37.453055 kubelet[2386]: E0912 00:13:37.453028 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 00:13:37.453168 kubelet[2386]: E0912 00:13:37.453091 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 00:13:37.553597 kubelet[2386]: I0912 00:13:37.553541 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:37.554102 kubelet[2386]: E0912 00:13:37.554044 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 12 00:13:37.589619 kubelet[2386]: E0912 00:13:37.589528 2386 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:37.756938 kubelet[2386]: I0912 00:13:37.756886 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:37.757412 kubelet[2386]: E0912 00:13:37.757343 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 12 00:13:38.160002 kubelet[2386]: I0912 00:13:38.159840 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:38.160464 kubelet[2386]: E0912 00:13:38.160394 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 12 00:13:38.642015 kubelet[2386]: E0912 00:13:38.641921 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="3.2s"
Sep 12 00:13:38.774768 systemd[1]: Created slice kubepods-burstable-pod56303a5f7545b6f79914a67c5981c2c8.slice - libcontainer container kubepods-burstable-pod56303a5f7545b6f79914a67c5981c2c8.slice.
Sep 12 00:13:38.787613 kubelet[2386]: E0912 00:13:38.787562 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:38.790631 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 00:13:38.793023 kubelet[2386]: E0912 00:13:38.792995 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:38.813308 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 00:13:38.815672 kubelet[2386]: E0912 00:13:38.815618 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:38.860456 kubelet[2386]: I0912 00:13:38.860342 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:38.860456 kubelet[2386]: I0912 00:13:38.860430 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:38.860675 kubelet[2386]: I0912 00:13:38.860470 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:38.860675 kubelet[2386]: I0912 00:13:38.860589 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:38.860675 kubelet[2386]: I0912 00:13:38.860628 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:38.860675 kubelet[2386]: I0912 00:13:38.860652 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:38.860821 kubelet[2386]: I0912 00:13:38.860675 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:38.860821 kubelet[2386]: I0912 00:13:38.860727 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:38.860821 kubelet[2386]: I0912 00:13:38.860756 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 00:13:38.922310 kubelet[2386]: W0912 00:13:38.922137 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:38.922310 kubelet[2386]: E0912 00:13:38.922212 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:38.963655 kubelet[2386]: I0912 00:13:38.963610 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:38.964190 kubelet[2386]: E0912 00:13:38.964137 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 12 00:13:39.088299 kubelet[2386]: E0912 00:13:39.088237 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.089422 containerd[1586]: time="2025-09-12T00:13:39.089332676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56303a5f7545b6f79914a67c5981c2c8,Namespace:kube-system,Attempt:0,}"
Sep 12 00:13:39.094593 kubelet[2386]: E0912 00:13:39.094557 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.095106 containerd[1586]: time="2025-09-12T00:13:39.094973508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 00:13:39.117010 kubelet[2386]: E0912 00:13:39.116952 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.118147 containerd[1586]: time="2025-09-12T00:13:39.117658399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 00:13:39.129573 containerd[1586]: time="2025-09-12T00:13:39.129500034Z" level=info msg="connecting to shim 003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191" address="unix:///run/containerd/s/06441bd7d26a8885140eb671c54246f13f6ae2fd19d323d105e9256f54a6f964" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:13:39.163697 containerd[1586]: time="2025-09-12T00:13:39.163276395Z" level=info msg="connecting to shim 0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f" address="unix:///run/containerd/s/f36de425de60376c5c33a04b64a0a11084c3202f8b4a7a8fc979e97dd98361c8" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:13:39.163850 kubelet[2386]: W0912 00:13:39.163558 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:39.163850 kubelet[2386]: E0912 00:13:39.163643 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:39.171571 containerd[1586]: time="2025-09-12T00:13:39.171513093Z" level=info msg="connecting to shim 84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694" address="unix:///run/containerd/s/ca13b3a72a21e9bea9b81bcdfa9e95934ac1c3572d969c66329418e287a16004" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:13:39.177580 systemd[1]: Started cri-containerd-003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191.scope - libcontainer container 003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191.
Sep 12 00:13:39.229617 systemd[1]: Started cri-containerd-84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694.scope - libcontainer container 84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694.
Sep 12 00:13:39.244002 systemd[1]: Started cri-containerd-0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f.scope - libcontainer container 0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f.
Sep 12 00:13:39.259133 containerd[1586]: time="2025-09-12T00:13:39.259070888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56303a5f7545b6f79914a67c5981c2c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191\""
Sep 12 00:13:39.261088 kubelet[2386]: E0912 00:13:39.260928 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.265014 containerd[1586]: time="2025-09-12T00:13:39.264935117Z" level=info msg="CreateContainer within sandbox \"003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 00:13:39.279840 containerd[1586]: time="2025-09-12T00:13:39.279720895Z" level=info msg="Container d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:13:39.287373 containerd[1586]: time="2025-09-12T00:13:39.286933384Z" level=info msg="CreateContainer within sandbox \"003a4206da686d66d7b2747b20744988a06d1732f5ab6a54f73604f968131191\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879\""
Sep 12 00:13:39.288034 containerd[1586]: time="2025-09-12T00:13:39.288000435Z" level=info msg="StartContainer for \"d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879\""
Sep 12 00:13:39.290089 containerd[1586]: time="2025-09-12T00:13:39.290047963Z" level=info msg="connecting to shim d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879" address="unix:///run/containerd/s/06441bd7d26a8885140eb671c54246f13f6ae2fd19d323d105e9256f54a6f964" protocol=ttrpc version=3
Sep 12 00:13:39.292792 kubelet[2386]: W0912 00:13:39.292698 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 12 00:13:39.292848 kubelet[2386]: E0912 00:13:39.292803 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 12 00:13:39.322525 systemd[1]: Started cri-containerd-d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879.scope - libcontainer container d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879.
Sep 12 00:13:39.329576 containerd[1586]: time="2025-09-12T00:13:39.329449697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f\""
Sep 12 00:13:39.330024 containerd[1586]: time="2025-09-12T00:13:39.329991332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694\""
Sep 12 00:13:39.330833 kubelet[2386]: E0912 00:13:39.330797 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.331793 kubelet[2386]: E0912 00:13:39.331770 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.333489 containerd[1586]: time="2025-09-12T00:13:39.333461772Z" level=info msg="CreateContainer within sandbox \"0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 00:13:39.334863 containerd[1586]: time="2025-09-12T00:13:39.334181799Z" level=info msg="CreateContainer within sandbox \"84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 00:13:39.349409 containerd[1586]: time="2025-09-12T00:13:39.348705977Z" level=info msg="Container c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:13:39.352413 containerd[1586]: time="2025-09-12T00:13:39.351978879Z" level=info msg="Container 5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:13:39.359298 containerd[1586]: time="2025-09-12T00:13:39.359076939Z" level=info msg="CreateContainer within sandbox \"0bc6cb5f651bc06b1a74324fcf3659ef89b5c5c66eabd075e0273e17ab29425f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343\""
Sep 12 00:13:39.360226 containerd[1586]: time="2025-09-12T00:13:39.360156194Z" level=info msg="StartContainer for \"c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343\""
Sep 12 00:13:39.361832 containerd[1586]: time="2025-09-12T00:13:39.361795119Z" level=info msg="connecting to shim c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343" address="unix:///run/containerd/s/f36de425de60376c5c33a04b64a0a11084c3202f8b4a7a8fc979e97dd98361c8" protocol=ttrpc version=3
Sep 12 00:13:39.363983 containerd[1586]: time="2025-09-12T00:13:39.363905466Z" level=info msg="CreateContainer within sandbox \"84f77ab0d776d13d7b1df02d8e5b5459a89e0d64b445c45cf1957e437097d694\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8\""
Sep 12 00:13:39.365390 containerd[1586]: time="2025-09-12T00:13:39.364802393Z" level=info msg="StartContainer for \"5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8\""
Sep 12 00:13:39.366830 containerd[1586]: time="2025-09-12T00:13:39.366774576Z" level=info msg="connecting to shim 5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8" address="unix:///run/containerd/s/ca13b3a72a21e9bea9b81bcdfa9e95934ac1c3572d969c66329418e287a16004" protocol=ttrpc version=3
Sep 12 00:13:39.408742 systemd[1]: Started cri-containerd-5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8.scope - libcontainer container 5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8.
Sep 12 00:13:39.409629 containerd[1586]: time="2025-09-12T00:13:39.409585963Z" level=info msg="StartContainer for \"d15acc3863c632e1ad1cf25200d05641a90c589d11795f6161264454357cf879\" returns successfully"
Sep 12 00:13:39.411280 systemd[1]: Started cri-containerd-c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343.scope - libcontainer container c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343.
Sep 12 00:13:39.493029 containerd[1586]: time="2025-09-12T00:13:39.492940255Z" level=info msg="StartContainer for \"5e6a98c51a3bf02ba643133cc3c2bd88776f422cb2b70fceb7577091b7782ae8\" returns successfully"
Sep 12 00:13:39.508418 containerd[1586]: time="2025-09-12T00:13:39.508327654Z" level=info msg="StartContainer for \"c7caad21f14b4f3f3b2952f982a7ad8f6c781f0395988c5d7a6661956bca4343\" returns successfully"
Sep 12 00:13:39.680525 kubelet[2386]: E0912 00:13:39.680484 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:39.681047 kubelet[2386]: E0912 00:13:39.680622 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.682503 kubelet[2386]: E0912 00:13:39.681955 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:39.682503 kubelet[2386]: E0912 00:13:39.682118 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:39.686933 kubelet[2386]: E0912 00:13:39.686913 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:39.687226 kubelet[2386]: E0912 00:13:39.687209 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:40.566029 kubelet[2386]: I0912 00:13:40.565985 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:40.688907 kubelet[2386]: E0912 00:13:40.688854 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:40.689390 kubelet[2386]: E0912 00:13:40.688975 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:40.690174 kubelet[2386]: E0912 00:13:40.690156 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:40.690268 kubelet[2386]: E0912 00:13:40.690241 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:40.690868 kubelet[2386]: E0912 00:13:40.690811 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:40.691034 kubelet[2386]: E0912 00:13:40.690962 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:41.121450 kubelet[2386]: I0912 00:13:41.121384 2386 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 00:13:41.121450 kubelet[2386]: E0912 00:13:41.121433 2386 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 00:13:41.131254 kubelet[2386]: E0912 00:13:41.131196 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.232018 kubelet[2386]: E0912 00:13:41.231954 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.333025 kubelet[2386]: E0912 00:13:41.332935 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.434106 kubelet[2386]: E0912 00:13:41.433926 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.534960 kubelet[2386]: E0912 00:13:41.534884 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.635583 kubelet[2386]: E0912 00:13:41.635513 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.691420 kubelet[2386]: E0912 00:13:41.691293 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:41.691420 kubelet[2386]: E0912 00:13:41.691353 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 00:13:41.691829 kubelet[2386]: E0912 00:13:41.691720 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:41.691829 kubelet[2386]: E0912 00:13:41.691810 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:41.735813 kubelet[2386]: E0912 00:13:41.735742 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.836471 kubelet[2386]: E0912 00:13:41.836350 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:41.936877 kubelet[2386]: E0912 00:13:41.936804 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.037433 kubelet[2386]: E0912 00:13:42.037368 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.137712 kubelet[2386]: E0912 00:13:42.137650 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.238423 kubelet[2386]: E0912 00:13:42.238340 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.339266 kubelet[2386]: E0912 00:13:42.339099 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.439883 kubelet[2386]: E0912 00:13:42.439830 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.540971 kubelet[2386]: E0912 00:13:42.540915 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:42.625573 kubelet[2386]: I0912 00:13:42.625419 2386 apiserver.go:52] "Watching apiserver"
Sep 12 00:13:42.638973 kubelet[2386]: I0912 00:13:42.638907 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 00:13:42.640129 kubelet[2386]: I0912 00:13:42.640077 2386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 00:13:42.648326 kubelet[2386]: I0912 00:13:42.648269 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:42.654265 kubelet[2386]: I0912 00:13:42.654140 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:42.659445 kubelet[2386]: E0912 00:13:42.659383 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:42.692194 kubelet[2386]: E0912 00:13:42.692147 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:42.692757 kubelet[2386]: E0912 00:13:42.692462 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:13:43.172749 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-7.scope)...
Sep 12 00:13:43.172767 systemd[1]: Reloading...
Sep 12 00:13:43.266471 zram_generator::config[2703]: No configuration found.
Sep 12 00:13:43.557852 systemd[1]: Reloading finished in 384 ms.
Sep 12 00:13:43.593890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 00:13:43.614508 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 00:13:43.614946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:43.615038 systemd[1]: kubelet.service: Consumed 996ms CPU time, 132.3M memory peak.
Sep 12 00:13:43.617577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 00:13:43.901873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 00:13:43.906285 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 00:13:43.959787 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 00:13:43.960384 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 00:13:43.960384 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 00:13:43.960384 kubelet[2748]: I0912 00:13:43.960321 2748 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 00:13:43.968301 kubelet[2748]: I0912 00:13:43.968245 2748 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 00:13:43.968301 kubelet[2748]: I0912 00:13:43.968279 2748 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 00:13:43.968693 kubelet[2748]: I0912 00:13:43.968670 2748 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 00:13:43.970424 kubelet[2748]: I0912 00:13:43.970399 2748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 00:13:43.975968 kubelet[2748]: I0912 00:13:43.975665 2748 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 00:13:43.986169 kubelet[2748]: I0912 00:13:43.986128 2748 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 00:13:43.991867 kubelet[2748]: I0912 00:13:43.991842 2748 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 00:13:43.992276 kubelet[2748]: I0912 00:13:43.992243 2748 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 00:13:43.992555 kubelet[2748]: I0912 00:13:43.992332 2748 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 00:13:43.992662 kubelet[2748]: I0912 00:13:43.992564 2748 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 00:13:43.992662 kubelet[2748]: I0912 00:13:43.992573 2748 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 00:13:43.992662 kubelet[2748]: I0912 00:13:43.992626 2748 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 00:13:43.992847 kubelet[2748]: I0912 00:13:43.992831 2748 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 00:13:43.992894 kubelet[2748]: I0912 00:13:43.992881 2748 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 00:13:43.992928 kubelet[2748]: I0912 00:13:43.992920 2748 kubelet.go:352] "Adding apiserver pod source"
Sep 12 00:13:43.992952 kubelet[2748]: I0912 00:13:43.992932 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 00:13:43.995453 kubelet[2748]: I0912 00:13:43.995405 2748 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 00:13:43.995940 kubelet[2748]: I0912 00:13:43.995917 2748 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 00:13:43.996725 kubelet[2748]: I0912 00:13:43.996463 2748 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 00:13:43.996725 kubelet[2748]: I0912 00:13:43.996496 2748 server.go:1287] "Started kubelet"
Sep 12 00:13:43.997142 kubelet[2748]: I0912 00:13:43.997088 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 00:13:43.997901 kubelet[2748]: I0912 00:13:43.997881 2748 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 00:13:43.997996 kubelet[2748]: I0912 00:13:43.997223 2748 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 00:13:44.000375 kubelet[2748]: I0912 00:13:43.998472 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 00:13:44.000375 kubelet[2748]: I0912 00:13:43.999376 2748 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 00:13:44.001074 kubelet[2748]: I0912 00:13:44.001056 2748 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 00:13:44.002029 kubelet[2748]: I0912 00:13:44.002001 2748 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 00:13:44.002545 kubelet[2748]: E0912 00:13:44.002514 2748 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 00:13:44.005638 kubelet[2748]: I0912 00:13:44.004867 2748 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 00:13:44.005638 kubelet[2748]: I0912 00:13:44.005062 2748 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 00:13:44.013939 kubelet[2748]: I0912 00:13:44.013781 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 00:13:44.015875 kubelet[2748]: I0912 00:13:44.015841 2748 factory.go:221] Registration of the systemd container factory successfully
Sep 12 00:13:44.015988 kubelet[2748]: I0912 00:13:44.015963 2748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 00:13:44.017488 kubelet[2748]: I0912 00:13:44.017464 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 00:13:44.018258 kubelet[2748]: I0912 00:13:44.018244 2748 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 00:13:44.018342 kubelet[2748]: I0912 00:13:44.018332 2748 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 00:13:44.018744 kubelet[2748]: I0912 00:13:44.018585 2748 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 00:13:44.018744 kubelet[2748]: E0912 00:13:44.018647 2748 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 00:13:44.023135 kubelet[2748]: E0912 00:13:44.023098 2748 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 00:13:44.023297 kubelet[2748]: I0912 00:13:44.023267 2748 factory.go:221] Registration of the containerd container factory successfully
Sep 12 00:13:44.079442 kubelet[2748]: I0912 00:13:44.079265 2748 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 00:13:44.079442 kubelet[2748]: I0912 00:13:44.079287 2748 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 00:13:44.079442 kubelet[2748]: I0912 00:13:44.079326 2748 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 00:13:44.079634 kubelet[2748]: I0912 00:13:44.079586 2748 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 00:13:44.079634 kubelet[2748]: I0912 00:13:44.079599 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 00:13:44.079680 kubelet[2748]: I0912 00:13:44.079641 2748 policy_none.go:49] "None policy: Start"
Sep 12 00:13:44.079680 kubelet[2748]: I0912 00:13:44.079652 2748 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 00:13:44.079680 kubelet[2748]: I0912 00:13:44.079668 2748 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 00:13:44.079920 kubelet[2748]: I0912 00:13:44.079889 2748 state_mem.go:75] "Updated machine memory state"
Sep 12 00:13:44.084299 kubelet[2748]: I0912 00:13:44.084278 2748 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 00:13:44.084534 kubelet[2748]: I0912 00:13:44.084490 2748 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 00:13:44.084534 kubelet[2748]: I0912 00:13:44.084508 2748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 00:13:44.084709 kubelet[2748]: I0912 00:13:44.084691 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 00:13:44.087390 kubelet[2748]: E0912 00:13:44.087288 2748 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 00:13:44.119780 kubelet[2748]: I0912 00:13:44.119710 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 00:13:44.119780 kubelet[2748]: I0912 00:13:44.119749 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:44.119971 kubelet[2748]: I0912 00:13:44.119755 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:44.128739 kubelet[2748]: E0912 00:13:44.128680 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 00:13:44.131376 kubelet[2748]: E0912 00:13:44.130985 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:44.131376 kubelet[2748]: E0912 00:13:44.131129 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:44.190421 kubelet[2748]: I0912 00:13:44.190267 2748 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:13:44.199585 kubelet[2748]: I0912 00:13:44.199546 2748 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 00:13:44.199801 kubelet[2748]: I0912 00:13:44.199657 2748 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 00:13:44.206190 kubelet[2748]: I0912 00:13:44.206115 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 00:13:44.206190 kubelet[2748]: I0912 00:13:44.206173 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:44.206573 kubelet[2748]: I0912 00:13:44.206227 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:13:44.206573 kubelet[2748]: I0912 00:13:44.206404 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:13:44.206573 kubelet[2748]: I0912 00:13:44.206440 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName:
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:13:44.206573 kubelet[2748]: I0912 00:13:44.206455 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:13:44.206573 kubelet[2748]: I0912 00:13:44.206469 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:13:44.206900 kubelet[2748]: I0912 00:13:44.206483 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56303a5f7545b6f79914a67c5981c2c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56303a5f7545b6f79914a67c5981c2c8\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:13:44.206900 kubelet[2748]: I0912 00:13:44.206505 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:13:44.429115 kubelet[2748]: E0912 00:13:44.429055 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:44.432315 kubelet[2748]: E0912 00:13:44.432276 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:44.432540 kubelet[2748]: E0912 00:13:44.432288 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:44.994335 kubelet[2748]: I0912 00:13:44.994275 2748 apiserver.go:52] "Watching apiserver" Sep 12 00:13:45.005727 kubelet[2748]: I0912 00:13:45.005639 2748 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 00:13:45.018703 kubelet[2748]: I0912 00:13:45.018614 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.018557794 podStartE2EDuration="3.018557794s" podCreationTimestamp="2025-09-12 00:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:13:45.018399984 +0000 UTC m=+1.108374775" watchObservedRunningTime="2025-09-12 00:13:45.018557794 +0000 UTC m=+1.108532585" Sep 12 00:13:45.027205 kubelet[2748]: I0912 00:13:45.027088 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.027032686 podStartE2EDuration="3.027032686s" podCreationTimestamp="2025-09-12 00:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:13:45.026514721 +0000 UTC m=+1.116489522" watchObservedRunningTime="2025-09-12 00:13:45.027032686 +0000 UTC m=+1.117007477" Sep 12 00:13:45.036668 kubelet[2748]: I0912 00:13:45.036592 2748 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.036574185 podStartE2EDuration="3.036574185s" podCreationTimestamp="2025-09-12 00:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:13:45.036426284 +0000 UTC m=+1.126401095" watchObservedRunningTime="2025-09-12 00:13:45.036574185 +0000 UTC m=+1.126548977" Sep 12 00:13:45.037208 kubelet[2748]: E0912 00:13:45.037181 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:45.037252 kubelet[2748]: I0912 00:13:45.037184 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:13:45.037328 kubelet[2748]: I0912 00:13:45.037292 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:13:45.045329 kubelet[2748]: E0912 00:13:45.045198 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 00:13:45.045524 kubelet[2748]: E0912 00:13:45.045348 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:45.048381 kubelet[2748]: E0912 00:13:45.046006 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 00:13:45.048381 kubelet[2748]: E0912 00:13:45.046173 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:46.039028 
kubelet[2748]: E0912 00:13:46.038942 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:46.039567 kubelet[2748]: E0912 00:13:46.039091 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:47.732671 kubelet[2748]: E0912 00:13:47.732610 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:48.220467 kubelet[2748]: I0912 00:13:48.220406 2748 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 00:13:48.221028 containerd[1586]: time="2025-09-12T00:13:48.220981521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 00:13:48.221498 kubelet[2748]: I0912 00:13:48.221210 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 00:13:48.599337 kubelet[2748]: E0912 00:13:48.599167 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:49.009656 systemd[1]: Created slice kubepods-besteffort-pod924d3409_1f2e_4419_a379_3979bf7c7c64.slice - libcontainer container kubepods-besteffort-pod924d3409_1f2e_4419_a379_3979bf7c7c64.slice. 
Sep 12 00:13:49.040010 kubelet[2748]: I0912 00:13:49.039804 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/924d3409-1f2e-4419-a379-3979bf7c7c64-kube-proxy\") pod \"kube-proxy-p2lsn\" (UID: \"924d3409-1f2e-4419-a379-3979bf7c7c64\") " pod="kube-system/kube-proxy-p2lsn" Sep 12 00:13:49.040010 kubelet[2748]: I0912 00:13:49.039856 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5z8z\" (UniqueName: \"kubernetes.io/projected/924d3409-1f2e-4419-a379-3979bf7c7c64-kube-api-access-n5z8z\") pod \"kube-proxy-p2lsn\" (UID: \"924d3409-1f2e-4419-a379-3979bf7c7c64\") " pod="kube-system/kube-proxy-p2lsn" Sep 12 00:13:49.040010 kubelet[2748]: I0912 00:13:49.039884 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/924d3409-1f2e-4419-a379-3979bf7c7c64-lib-modules\") pod \"kube-proxy-p2lsn\" (UID: \"924d3409-1f2e-4419-a379-3979bf7c7c64\") " pod="kube-system/kube-proxy-p2lsn" Sep 12 00:13:49.040010 kubelet[2748]: I0912 00:13:49.039907 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/924d3409-1f2e-4419-a379-3979bf7c7c64-xtables-lock\") pod \"kube-proxy-p2lsn\" (UID: \"924d3409-1f2e-4419-a379-3979bf7c7c64\") " pod="kube-system/kube-proxy-p2lsn" Sep 12 00:13:49.201217 kubelet[2748]: W0912 00:13:49.200898 2748 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Sep 12 00:13:49.201217 kubelet[2748]: E0912 00:13:49.200978 2748 
reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 00:13:49.208586 systemd[1]: Created slice kubepods-besteffort-pod30fbfc27_e61d_4e2f_bea8_616ae046e548.slice - libcontainer container kubepods-besteffort-pod30fbfc27_e61d_4e2f_bea8_616ae046e548.slice. Sep 12 00:13:49.241859 kubelet[2748]: I0912 00:13:49.241763 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30fbfc27-e61d-4e2f-bea8-616ae046e548-var-lib-calico\") pod \"tigera-operator-755d956888-rpg45\" (UID: \"30fbfc27-e61d-4e2f-bea8-616ae046e548\") " pod="tigera-operator/tigera-operator-755d956888-rpg45" Sep 12 00:13:49.242163 kubelet[2748]: I0912 00:13:49.242119 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6hp\" (UniqueName: \"kubernetes.io/projected/30fbfc27-e61d-4e2f-bea8-616ae046e548-kube-api-access-hq6hp\") pod \"tigera-operator-755d956888-rpg45\" (UID: \"30fbfc27-e61d-4e2f-bea8-616ae046e548\") " pod="tigera-operator/tigera-operator-755d956888-rpg45" Sep 12 00:13:49.325300 kubelet[2748]: E0912 00:13:49.325136 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:49.325997 containerd[1586]: time="2025-09-12T00:13:49.325939300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2lsn,Uid:924d3409-1f2e-4419-a379-3979bf7c7c64,Namespace:kube-system,Attempt:0,}" Sep 12 00:13:49.474414 containerd[1586]: 
time="2025-09-12T00:13:49.474316138Z" level=info msg="connecting to shim 073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d" address="unix:///run/containerd/s/4e1c8401673d87e6f40447fb7dea286d3326ab0817bf1e8984968b9c1b9c26dc" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:13:49.513137 containerd[1586]: time="2025-09-12T00:13:49.512776585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rpg45,Uid:30fbfc27-e61d-4e2f-bea8-616ae046e548,Namespace:tigera-operator,Attempt:0,}" Sep 12 00:13:49.513646 systemd[1]: Started cri-containerd-073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d.scope - libcontainer container 073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d. Sep 12 00:13:49.550708 containerd[1586]: time="2025-09-12T00:13:49.550642514Z" level=info msg="connecting to shim 010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c" address="unix:///run/containerd/s/9bff30b50af071b13b4884d75661b01b6d8edffec0ae13283ed2e3b94c3e6e2c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:13:49.560533 containerd[1586]: time="2025-09-12T00:13:49.560464500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2lsn,Uid:924d3409-1f2e-4419-a379-3979bf7c7c64,Namespace:kube-system,Attempt:0,} returns sandbox id \"073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d\"" Sep 12 00:13:49.561575 kubelet[2748]: E0912 00:13:49.561540 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:49.566907 containerd[1586]: time="2025-09-12T00:13:49.566836087Z" level=info msg="CreateContainer within sandbox \"073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 00:13:49.595002 containerd[1586]: time="2025-09-12T00:13:49.593437206Z" level=info msg="Container 
89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:13:49.597694 systemd[1]: Started cri-containerd-010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c.scope - libcontainer container 010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c. Sep 12 00:13:49.608084 containerd[1586]: time="2025-09-12T00:13:49.608027590Z" level=info msg="CreateContainer within sandbox \"073aea5cedfcfb5340edfbcf3785506f9ed69f78ef6a1a60cc2d01a1a681a29d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1\"" Sep 12 00:13:49.609873 containerd[1586]: time="2025-09-12T00:13:49.609831829Z" level=info msg="StartContainer for \"89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1\"" Sep 12 00:13:49.611755 containerd[1586]: time="2025-09-12T00:13:49.611705220Z" level=info msg="connecting to shim 89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1" address="unix:///run/containerd/s/4e1c8401673d87e6f40447fb7dea286d3326ab0817bf1e8984968b9c1b9c26dc" protocol=ttrpc version=3 Sep 12 00:13:49.639504 systemd[1]: Started cri-containerd-89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1.scope - libcontainer container 89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1. 
Sep 12 00:13:49.672914 containerd[1586]: time="2025-09-12T00:13:49.672849751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rpg45,Uid:30fbfc27-e61d-4e2f-bea8-616ae046e548,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c\"" Sep 12 00:13:49.675223 containerd[1586]: time="2025-09-12T00:13:49.675033059Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 00:13:49.706927 containerd[1586]: time="2025-09-12T00:13:49.706809637Z" level=info msg="StartContainer for \"89acaae85b86b8b5530b63019f04a1c6c2c6652421ee08076d6ca1cb90045cc1\" returns successfully" Sep 12 00:13:50.049265 kubelet[2748]: E0912 00:13:50.049225 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:51.250006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802421406.mount: Deactivated successfully. Sep 12 00:13:51.737663 update_engine[1569]: I20250912 00:13:51.737541 1569 update_attempter.cc:509] Updating boot flags... 
Sep 12 00:13:52.147582 containerd[1586]: time="2025-09-12T00:13:52.147431781Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:52.148193 containerd[1586]: time="2025-09-12T00:13:52.148162233Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 00:13:52.149227 containerd[1586]: time="2025-09-12T00:13:52.149188044Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:52.152447 containerd[1586]: time="2025-09-12T00:13:52.152395182Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:13:52.153132 containerd[1586]: time="2025-09-12T00:13:52.153090196Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.477985021s" Sep 12 00:13:52.153132 containerd[1586]: time="2025-09-12T00:13:52.153120594Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 00:13:52.155110 containerd[1586]: time="2025-09-12T00:13:52.155072937Z" level=info msg="CreateContainer within sandbox \"010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 00:13:52.164541 containerd[1586]: time="2025-09-12T00:13:52.164475657Z" level=info msg="Container 
746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:13:52.168036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685378342.mount: Deactivated successfully. Sep 12 00:13:52.171996 containerd[1586]: time="2025-09-12T00:13:52.171936753Z" level=info msg="CreateContainer within sandbox \"010bc027589d254a477533c874c67c446f8fffe69521a65f2feaed4e8614737c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b\"" Sep 12 00:13:52.173377 containerd[1586]: time="2025-09-12T00:13:52.172477035Z" level=info msg="StartContainer for \"746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b\"" Sep 12 00:13:52.173377 containerd[1586]: time="2025-09-12T00:13:52.173273302Z" level=info msg="connecting to shim 746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b" address="unix:///run/containerd/s/9bff30b50af071b13b4884d75661b01b6d8edffec0ae13283ed2e3b94c3e6e2c" protocol=ttrpc version=3 Sep 12 00:13:52.227702 systemd[1]: Started cri-containerd-746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b.scope - libcontainer container 746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b. 
Sep 12 00:13:52.264975 containerd[1586]: time="2025-09-12T00:13:52.264928724Z" level=info msg="StartContainer for \"746f862d5017a7267d328f68a0ddc7ce8743cadce516de2069f482a75837239b\" returns successfully" Sep 12 00:13:53.066873 kubelet[2748]: I0912 00:13:53.066801 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p2lsn" podStartSLOduration=5.066775798 podStartE2EDuration="5.066775798s" podCreationTimestamp="2025-09-12 00:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:13:50.100621522 +0000 UTC m=+6.190596313" watchObservedRunningTime="2025-09-12 00:13:53.066775798 +0000 UTC m=+9.156750589" Sep 12 00:13:53.282173 kubelet[2748]: E0912 00:13:53.282116 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:53.299108 kubelet[2748]: I0912 00:13:53.299022 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-rpg45" podStartSLOduration=1.819519887 podStartE2EDuration="4.298998246s" podCreationTimestamp="2025-09-12 00:13:49 +0000 UTC" firstStartedPulling="2025-09-12 00:13:49.674452869 +0000 UTC m=+5.764427670" lastFinishedPulling="2025-09-12 00:13:52.153931238 +0000 UTC m=+8.243906029" observedRunningTime="2025-09-12 00:13:53.067033605 +0000 UTC m=+9.157008396" watchObservedRunningTime="2025-09-12 00:13:53.298998246 +0000 UTC m=+9.388973027" Sep 12 00:13:54.057834 kubelet[2748]: E0912 00:13:54.057793 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:58.262292 kubelet[2748]: E0912 00:13:58.262243 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:58.603128 kubelet[2748]: E0912 00:13:58.602974 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:13:58.726765 sudo[1808]: pam_unix(sudo:session): session closed for user root Sep 12 00:13:58.729385 sshd[1807]: Connection closed by 10.0.0.1 port 57240 Sep 12 00:13:58.730223 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Sep 12 00:13:58.736969 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:57240.service: Deactivated successfully. Sep 12 00:13:58.745708 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 00:13:58.745983 systemd[1]: session-7.scope: Consumed 6.351s CPU time, 228.7M memory peak. Sep 12 00:13:58.747566 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Sep 12 00:13:58.749228 systemd-logind[1568]: Removed session 7. Sep 12 00:13:59.070979 kubelet[2748]: E0912 00:13:59.070945 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:03.083462 systemd[1]: Created slice kubepods-besteffort-pod7f71c421_013e_4b3a_8f51_c104b3f26d17.slice - libcontainer container kubepods-besteffort-pod7f71c421_013e_4b3a_8f51_c104b3f26d17.slice. 
Sep 12 00:14:03.125738 kubelet[2748]: I0912 00:14:03.125687 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7f71c421-013e-4b3a-8f51-c104b3f26d17-typha-certs\") pod \"calico-typha-d79d4dd4b-flbhv\" (UID: \"7f71c421-013e-4b3a-8f51-c104b3f26d17\") " pod="calico-system/calico-typha-d79d4dd4b-flbhv" Sep 12 00:14:03.126326 kubelet[2748]: I0912 00:14:03.126278 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f71c421-013e-4b3a-8f51-c104b3f26d17-tigera-ca-bundle\") pod \"calico-typha-d79d4dd4b-flbhv\" (UID: \"7f71c421-013e-4b3a-8f51-c104b3f26d17\") " pod="calico-system/calico-typha-d79d4dd4b-flbhv" Sep 12 00:14:03.126490 kubelet[2748]: I0912 00:14:03.126454 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77cp\" (UniqueName: \"kubernetes.io/projected/7f71c421-013e-4b3a-8f51-c104b3f26d17-kube-api-access-f77cp\") pod \"calico-typha-d79d4dd4b-flbhv\" (UID: \"7f71c421-013e-4b3a-8f51-c104b3f26d17\") " pod="calico-system/calico-typha-d79d4dd4b-flbhv" Sep 12 00:14:03.269311 systemd[1]: Created slice kubepods-besteffort-pod09511593_de82_4505_9703_71b0af3ae77b.slice - libcontainer container kubepods-besteffort-pod09511593_de82_4505_9703_71b0af3ae77b.slice. 
Sep 12 00:14:03.327889 kubelet[2748]: I0912 00:14:03.327795 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-cni-bin-dir\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.327889 kubelet[2748]: I0912 00:14:03.327873 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-cni-log-dir\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.327889 kubelet[2748]: I0912 00:14:03.327900 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-policysync\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328172 kubelet[2748]: I0912 00:14:03.327928 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-lib-modules\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328172 kubelet[2748]: I0912 00:14:03.327954 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-var-lib-calico\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328172 kubelet[2748]: I0912 00:14:03.328009 2748 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfmc\" (UniqueName: \"kubernetes.io/projected/09511593-de82-4505-9703-71b0af3ae77b-kube-api-access-wkfmc\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328172 kubelet[2748]: I0912 00:14:03.328118 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/09511593-de82-4505-9703-71b0af3ae77b-node-certs\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328301 kubelet[2748]: I0912 00:14:03.328191 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-flexvol-driver-host\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328301 kubelet[2748]: I0912 00:14:03.328219 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09511593-de82-4505-9703-71b0af3ae77b-tigera-ca-bundle\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328301 kubelet[2748]: I0912 00:14:03.328237 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-xtables-lock\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328301 kubelet[2748]: I0912 00:14:03.328265 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-cni-net-dir\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.328301 kubelet[2748]: I0912 00:14:03.328283 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/09511593-de82-4505-9703-71b0af3ae77b-var-run-calico\") pod \"calico-node-tw5hs\" (UID: \"09511593-de82-4505-9703-71b0af3ae77b\") " pod="calico-system/calico-node-tw5hs" Sep 12 00:14:03.392251 kubelet[2748]: E0912 00:14:03.391570 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:03.393148 containerd[1586]: time="2025-09-12T00:14:03.393086688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d79d4dd4b-flbhv,Uid:7f71c421-013e-4b3a-8f51-c104b3f26d17,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:03.432280 kubelet[2748]: E0912 00:14:03.432151 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.432280 kubelet[2748]: W0912 00:14:03.432184 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.432280 kubelet[2748]: E0912 00:14:03.432224 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.442690 kubelet[2748]: E0912 00:14:03.442642 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.442690 kubelet[2748]: W0912 00:14:03.442677 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.443118 kubelet[2748]: E0912 00:14:03.442719 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.443118 kubelet[2748]: E0912 00:14:03.443025 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.443118 kubelet[2748]: W0912 00:14:03.443038 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.443118 kubelet[2748]: E0912 00:14:03.443050 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.443306 containerd[1586]: time="2025-09-12T00:14:03.443254799Z" level=info msg="connecting to shim 7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01" address="unix:///run/containerd/s/78dbb56c7e41b7c2f15625a5f0494e1095d0fb6a4d6c336755a9e7f43d9f8632" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:03.471094 systemd[1]: Started cri-containerd-7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01.scope - libcontainer container 7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01. 
Sep 12 00:14:03.574940 containerd[1586]: time="2025-09-12T00:14:03.574493139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d79d4dd4b-flbhv,Uid:7f71c421-013e-4b3a-8f51-c104b3f26d17,Namespace:calico-system,Attempt:0,} returns sandbox id \"7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01\"" Sep 12 00:14:03.575960 containerd[1586]: time="2025-09-12T00:14:03.575896823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tw5hs,Uid:09511593-de82-4505-9703-71b0af3ae77b,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:03.578019 kubelet[2748]: E0912 00:14:03.577974 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:03.581599 containerd[1586]: time="2025-09-12T00:14:03.581463440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 00:14:03.587266 kubelet[2748]: E0912 00:14:03.587204 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc" Sep 12 00:14:03.622475 kubelet[2748]: E0912 00:14:03.622432 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.622475 kubelet[2748]: W0912 00:14:03.622459 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.622475 kubelet[2748]: E0912 00:14:03.622483 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.622734 kubelet[2748]: E0912 00:14:03.622718 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.622734 kubelet[2748]: W0912 00:14:03.622728 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.622785 kubelet[2748]: E0912 00:14:03.622737 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.622960 kubelet[2748]: E0912 00:14:03.622944 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.622960 kubelet[2748]: W0912 00:14:03.622954 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.623015 kubelet[2748]: E0912 00:14:03.622962 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.623233 kubelet[2748]: E0912 00:14:03.623218 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.623233 kubelet[2748]: W0912 00:14:03.623229 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.623289 kubelet[2748]: E0912 00:14:03.623238 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.623448 kubelet[2748]: E0912 00:14:03.623436 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.623479 kubelet[2748]: W0912 00:14:03.623448 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.623479 kubelet[2748]: E0912 00:14:03.623456 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.623655 kubelet[2748]: E0912 00:14:03.623643 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.623655 kubelet[2748]: W0912 00:14:03.623653 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.623703 kubelet[2748]: E0912 00:14:03.623660 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.623835 kubelet[2748]: E0912 00:14:03.623824 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.623835 kubelet[2748]: W0912 00:14:03.623833 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.623887 kubelet[2748]: E0912 00:14:03.623841 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.624026 kubelet[2748]: E0912 00:14:03.624014 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.624026 kubelet[2748]: W0912 00:14:03.624025 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.624070 kubelet[2748]: E0912 00:14:03.624033 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.624196 kubelet[2748]: E0912 00:14:03.624182 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.624196 kubelet[2748]: W0912 00:14:03.624192 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.624252 kubelet[2748]: E0912 00:14:03.624199 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.624404 kubelet[2748]: E0912 00:14:03.624392 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.624404 kubelet[2748]: W0912 00:14:03.624401 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.624469 kubelet[2748]: E0912 00:14:03.624409 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.624616 kubelet[2748]: E0912 00:14:03.624603 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.624616 kubelet[2748]: W0912 00:14:03.624613 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.624682 kubelet[2748]: E0912 00:14:03.624622 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.624825 kubelet[2748]: E0912 00:14:03.624813 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.624825 kubelet[2748]: W0912 00:14:03.624823 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.624873 kubelet[2748]: E0912 00:14:03.624831 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.625050 kubelet[2748]: E0912 00:14:03.625036 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.625080 kubelet[2748]: W0912 00:14:03.625049 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.625080 kubelet[2748]: E0912 00:14:03.625060 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.625276 kubelet[2748]: E0912 00:14:03.625262 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.625301 kubelet[2748]: W0912 00:14:03.625274 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.625301 kubelet[2748]: E0912 00:14:03.625283 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.625498 kubelet[2748]: E0912 00:14:03.625484 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.625541 kubelet[2748]: W0912 00:14:03.625498 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.625541 kubelet[2748]: E0912 00:14:03.625517 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.625724 kubelet[2748]: E0912 00:14:03.625711 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.625750 kubelet[2748]: W0912 00:14:03.625722 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.625750 kubelet[2748]: E0912 00:14:03.625732 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.625938 kubelet[2748]: E0912 00:14:03.625925 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.625968 kubelet[2748]: W0912 00:14:03.625937 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.625968 kubelet[2748]: E0912 00:14:03.625948 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.626176 kubelet[2748]: E0912 00:14:03.626139 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.626176 kubelet[2748]: W0912 00:14:03.626152 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.626176 kubelet[2748]: E0912 00:14:03.626160 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.626395 kubelet[2748]: E0912 00:14:03.626316 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.626395 kubelet[2748]: W0912 00:14:03.626325 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.626395 kubelet[2748]: E0912 00:14:03.626334 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.626531 kubelet[2748]: E0912 00:14:03.626514 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.626531 kubelet[2748]: W0912 00:14:03.626525 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.626584 kubelet[2748]: E0912 00:14:03.626533 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.632049 kubelet[2748]: E0912 00:14:03.631994 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.632049 kubelet[2748]: W0912 00:14:03.632022 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.632049 kubelet[2748]: E0912 00:14:03.632046 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.632192 kubelet[2748]: I0912 00:14:03.632102 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a8d8933-0196-4a27-a293-4f89ec69d3dc-kubelet-dir\") pod \"csi-node-driver-kmdzp\" (UID: \"4a8d8933-0196-4a27-a293-4f89ec69d3dc\") " pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:03.632329 kubelet[2748]: E0912 00:14:03.632303 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.632329 kubelet[2748]: W0912 00:14:03.632317 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.632405 kubelet[2748]: E0912 00:14:03.632332 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.632405 kubelet[2748]: I0912 00:14:03.632377 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4a8d8933-0196-4a27-a293-4f89ec69d3dc-socket-dir\") pod \"csi-node-driver-kmdzp\" (UID: \"4a8d8933-0196-4a27-a293-4f89ec69d3dc\") " pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:03.632667 kubelet[2748]: E0912 00:14:03.632634 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.632667 kubelet[2748]: W0912 00:14:03.632657 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.632667 kubelet[2748]: E0912 00:14:03.632679 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.632890 kubelet[2748]: E0912 00:14:03.632871 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.632890 kubelet[2748]: W0912 00:14:03.632887 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.632966 kubelet[2748]: E0912 00:14:03.632905 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.633151 kubelet[2748]: E0912 00:14:03.633130 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.633151 kubelet[2748]: W0912 00:14:03.633146 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.633218 kubelet[2748]: E0912 00:14:03.633161 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.633218 kubelet[2748]: I0912 00:14:03.633190 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwnq\" (UniqueName: \"kubernetes.io/projected/4a8d8933-0196-4a27-a293-4f89ec69d3dc-kube-api-access-5hwnq\") pod \"csi-node-driver-kmdzp\" (UID: \"4a8d8933-0196-4a27-a293-4f89ec69d3dc\") " pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:03.633544 kubelet[2748]: E0912 00:14:03.633485 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.633544 kubelet[2748]: W0912 00:14:03.633531 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.633703 kubelet[2748]: E0912 00:14:03.633571 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.633703 kubelet[2748]: I0912 00:14:03.633611 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4a8d8933-0196-4a27-a293-4f89ec69d3dc-registration-dir\") pod \"csi-node-driver-kmdzp\" (UID: \"4a8d8933-0196-4a27-a293-4f89ec69d3dc\") " pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:03.633868 kubelet[2748]: E0912 00:14:03.633839 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.633868 kubelet[2748]: W0912 00:14:03.633857 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.633926 kubelet[2748]: E0912 00:14:03.633875 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.634070 kubelet[2748]: E0912 00:14:03.634055 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.634070 kubelet[2748]: W0912 00:14:03.634066 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.634121 kubelet[2748]: E0912 00:14:03.634081 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.634311 kubelet[2748]: E0912 00:14:03.634291 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.634311 kubelet[2748]: W0912 00:14:03.634305 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.634456 kubelet[2748]: E0912 00:14:03.634325 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.634583 kubelet[2748]: E0912 00:14:03.634566 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.634583 kubelet[2748]: W0912 00:14:03.634580 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.634655 kubelet[2748]: E0912 00:14:03.634595 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.634795 kubelet[2748]: E0912 00:14:03.634779 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.634795 kubelet[2748]: W0912 00:14:03.634790 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.634858 kubelet[2748]: E0912 00:14:03.634802 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.634858 kubelet[2748]: I0912 00:14:03.634819 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4a8d8933-0196-4a27-a293-4f89ec69d3dc-varrun\") pod \"csi-node-driver-kmdzp\" (UID: \"4a8d8933-0196-4a27-a293-4f89ec69d3dc\") " pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:03.634988 kubelet[2748]: E0912 00:14:03.634970 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.634988 kubelet[2748]: W0912 00:14:03.634984 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.635045 kubelet[2748]: E0912 00:14:03.634999 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.635225 kubelet[2748]: E0912 00:14:03.635206 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.635225 kubelet[2748]: W0912 00:14:03.635221 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.635289 kubelet[2748]: E0912 00:14:03.635240 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.635490 kubelet[2748]: E0912 00:14:03.635460 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.635490 kubelet[2748]: W0912 00:14:03.635478 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.635490 kubelet[2748]: E0912 00:14:03.635491 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.635793 kubelet[2748]: E0912 00:14:03.635769 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.635793 kubelet[2748]: W0912 00:14:03.635783 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.635849 kubelet[2748]: E0912 00:14:03.635793 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.677793 containerd[1586]: time="2025-09-12T00:14:03.675411777Z" level=info msg="connecting to shim 328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2" address="unix:///run/containerd/s/8b85ce2d57f5c7cfecc20ccd79799a0002d7a6c2b1f31983f43abfd8f35f4307" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:03.712533 systemd[1]: Started cri-containerd-328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2.scope - libcontainer container 328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2. Sep 12 00:14:03.736604 kubelet[2748]: E0912 00:14:03.736544 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.736604 kubelet[2748]: W0912 00:14:03.736583 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.736873 kubelet[2748]: E0912 00:14:03.736651 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.737865 kubelet[2748]: E0912 00:14:03.737292 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.737865 kubelet[2748]: W0912 00:14:03.737307 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.737865 kubelet[2748]: E0912 00:14:03.737367 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.737865 kubelet[2748]: E0912 00:14:03.737846 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.737865 kubelet[2748]: W0912 00:14:03.737859 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.738069 kubelet[2748]: E0912 00:14:03.737921 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.738437 kubelet[2748]: E0912 00:14:03.738414 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.738522 kubelet[2748]: W0912 00:14:03.738432 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.738582 kubelet[2748]: E0912 00:14:03.738564 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.739022 kubelet[2748]: E0912 00:14:03.739001 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.739022 kubelet[2748]: W0912 00:14:03.739017 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.739130 kubelet[2748]: E0912 00:14:03.739112 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.739465 kubelet[2748]: E0912 00:14:03.739445 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.739465 kubelet[2748]: W0912 00:14:03.739460 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.739578 kubelet[2748]: E0912 00:14:03.739535 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.739974 kubelet[2748]: E0912 00:14:03.739836 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.739974 kubelet[2748]: W0912 00:14:03.739973 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.740180 kubelet[2748]: E0912 00:14:03.740138 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.740495 kubelet[2748]: E0912 00:14:03.740462 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.740586 kubelet[2748]: W0912 00:14:03.740567 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.740758 kubelet[2748]: E0912 00:14:03.740674 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.740936 kubelet[2748]: E0912 00:14:03.740920 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.741097 kubelet[2748]: W0912 00:14:03.741000 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.741097 kubelet[2748]: E0912 00:14:03.741070 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.741725 kubelet[2748]: E0912 00:14:03.741684 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.741725 kubelet[2748]: W0912 00:14:03.741707 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.741834 kubelet[2748]: E0912 00:14:03.741753 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.742215 kubelet[2748]: E0912 00:14:03.742191 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.742215 kubelet[2748]: W0912 00:14:03.742210 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.742540 kubelet[2748]: E0912 00:14:03.742389 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.742733 kubelet[2748]: E0912 00:14:03.742708 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.742733 kubelet[2748]: W0912 00:14:03.742727 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.742805 kubelet[2748]: E0912 00:14:03.742788 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.743086 kubelet[2748]: E0912 00:14:03.743051 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.743086 kubelet[2748]: W0912 00:14:03.743069 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.743175 kubelet[2748]: E0912 00:14:03.743161 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.743873 kubelet[2748]: E0912 00:14:03.743809 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.744083 kubelet[2748]: W0912 00:14:03.743849 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.744219 kubelet[2748]: E0912 00:14:03.744184 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.745606 kubelet[2748]: E0912 00:14:03.745574 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.745606 kubelet[2748]: W0912 00:14:03.745599 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.745707 kubelet[2748]: E0912 00:14:03.745653 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.747708 kubelet[2748]: E0912 00:14:03.747673 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.747708 kubelet[2748]: W0912 00:14:03.747699 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.748383 kubelet[2748]: E0912 00:14:03.747858 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.748655 kubelet[2748]: E0912 00:14:03.748629 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.748655 kubelet[2748]: W0912 00:14:03.748648 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.748759 kubelet[2748]: E0912 00:14:03.748743 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.749028 kubelet[2748]: E0912 00:14:03.749001 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.749028 kubelet[2748]: W0912 00:14:03.749019 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.750383 kubelet[2748]: E0912 00:14:03.749131 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.750383 kubelet[2748]: E0912 00:14:03.749420 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.750383 kubelet[2748]: W0912 00:14:03.749432 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.750383 kubelet[2748]: E0912 00:14:03.749498 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.750383 kubelet[2748]: E0912 00:14:03.749965 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.750383 kubelet[2748]: W0912 00:14:03.749979 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.750383 kubelet[2748]: E0912 00:14:03.750026 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.750624 kubelet[2748]: E0912 00:14:03.750412 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.750624 kubelet[2748]: W0912 00:14:03.750426 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.750624 kubelet[2748]: E0912 00:14:03.750470 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.751109 kubelet[2748]: E0912 00:14:03.750811 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.751109 kubelet[2748]: W0912 00:14:03.750828 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.751109 kubelet[2748]: E0912 00:14:03.751055 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.752405 kubelet[2748]: E0912 00:14:03.751195 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.752405 kubelet[2748]: W0912 00:14:03.751338 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.752405 kubelet[2748]: E0912 00:14:03.751826 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.752580 kubelet[2748]: E0912 00:14:03.752457 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.752580 kubelet[2748]: W0912 00:14:03.752470 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.752580 kubelet[2748]: E0912 00:14:03.752489 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:03.752954 kubelet[2748]: E0912 00:14:03.752927 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.752954 kubelet[2748]: W0912 00:14:03.752947 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.753044 kubelet[2748]: E0912 00:14:03.752959 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:03.766871 containerd[1586]: time="2025-09-12T00:14:03.766804782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tw5hs,Uid:09511593-de82-4505-9703-71b0af3ae77b,Namespace:calico-system,Attempt:0,} returns sandbox id \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\"" Sep 12 00:14:03.774705 kubelet[2748]: E0912 00:14:03.774569 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:03.774705 kubelet[2748]: W0912 00:14:03.774606 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:03.774705 kubelet[2748]: E0912 00:14:03.774646 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:05.019909 kubelet[2748]: E0912 00:14:05.019847 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc" Sep 12 00:14:06.088625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277095237.mount: Deactivated successfully. 
Sep 12 00:14:06.651122 containerd[1586]: time="2025-09-12T00:14:06.651044248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:06.652058 containerd[1586]: time="2025-09-12T00:14:06.652020766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 00:14:06.653462 containerd[1586]: time="2025-09-12T00:14:06.653417124Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:06.655927 containerd[1586]: time="2025-09-12T00:14:06.655716722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:06.656329 containerd[1586]: time="2025-09-12T00:14:06.656296053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.074786207s" Sep 12 00:14:06.656398 containerd[1586]: time="2025-09-12T00:14:06.656329326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 00:14:06.657556 containerd[1586]: time="2025-09-12T00:14:06.657519356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 00:14:06.671192 containerd[1586]: time="2025-09-12T00:14:06.671120227Z" level=info msg="CreateContainer within sandbox \"7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 00:14:06.680019 containerd[1586]: time="2025-09-12T00:14:06.679965145Z" level=info msg="Container 04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:06.690786 containerd[1586]: time="2025-09-12T00:14:06.690706262Z" level=info msg="CreateContainer within sandbox \"7efa2f9a77bfd98031ddc4706d1c37fc220b8701a65c450cbaaff4e71146ee01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0\"" Sep 12 00:14:06.691434 containerd[1586]: time="2025-09-12T00:14:06.691388998Z" level=info msg="StartContainer for \"04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0\"" Sep 12 00:14:06.692582 containerd[1586]: time="2025-09-12T00:14:06.692523553Z" level=info msg="connecting to shim 04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0" address="unix:///run/containerd/s/78dbb56c7e41b7c2f15625a5f0494e1095d0fb6a4d6c336755a9e7f43d9f8632" protocol=ttrpc version=3 Sep 12 00:14:06.721659 systemd[1]: Started cri-containerd-04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0.scope - libcontainer container 04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0. 
Sep 12 00:14:06.790706 containerd[1586]: time="2025-09-12T00:14:06.790648551Z" level=info msg="StartContainer for \"04c33b50c30883775fff8287dbffa193852fb12baace421f661038d08e86a4d0\" returns successfully" Sep 12 00:14:07.020385 kubelet[2748]: E0912 00:14:07.019281 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc" Sep 12 00:14:07.093935 kubelet[2748]: E0912 00:14:07.093836 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:07.148326 kubelet[2748]: E0912 00:14:07.148272 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.148326 kubelet[2748]: W0912 00:14:07.148305 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.148326 kubelet[2748]: E0912 00:14:07.148334 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.148643 kubelet[2748]: E0912 00:14:07.148623 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.148643 kubelet[2748]: W0912 00:14:07.148640 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.148736 kubelet[2748]: E0912 00:14:07.148652 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.148899 kubelet[2748]: E0912 00:14:07.148869 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.148899 kubelet[2748]: W0912 00:14:07.148882 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.148899 kubelet[2748]: E0912 00:14:07.148892 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.149154 kubelet[2748]: E0912 00:14:07.149126 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.149154 kubelet[2748]: W0912 00:14:07.149139 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.149154 kubelet[2748]: E0912 00:14:07.149149 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.149447 kubelet[2748]: E0912 00:14:07.149409 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.149447 kubelet[2748]: W0912 00:14:07.149423 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.149447 kubelet[2748]: E0912 00:14:07.149445 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.149695 kubelet[2748]: E0912 00:14:07.149670 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.149695 kubelet[2748]: W0912 00:14:07.149685 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.149795 kubelet[2748]: E0912 00:14:07.149697 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.149906 kubelet[2748]: E0912 00:14:07.149888 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.149906 kubelet[2748]: W0912 00:14:07.149901 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.150016 kubelet[2748]: E0912 00:14:07.149911 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.150146 kubelet[2748]: E0912 00:14:07.150126 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.150146 kubelet[2748]: W0912 00:14:07.150138 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.150226 kubelet[2748]: E0912 00:14:07.150149 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.150398 kubelet[2748]: E0912 00:14:07.150379 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.150398 kubelet[2748]: W0912 00:14:07.150392 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.150502 kubelet[2748]: E0912 00:14:07.150403 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.150638 kubelet[2748]: E0912 00:14:07.150608 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.150638 kubelet[2748]: W0912 00:14:07.150630 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.150714 kubelet[2748]: E0912 00:14:07.150640 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.150870 kubelet[2748]: E0912 00:14:07.150838 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.150870 kubelet[2748]: W0912 00:14:07.150849 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.150870 kubelet[2748]: E0912 00:14:07.150859 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.151074 kubelet[2748]: E0912 00:14:07.151055 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.151074 kubelet[2748]: W0912 00:14:07.151067 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.151152 kubelet[2748]: E0912 00:14:07.151077 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.151291 kubelet[2748]: E0912 00:14:07.151272 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.151291 kubelet[2748]: W0912 00:14:07.151284 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.151393 kubelet[2748]: E0912 00:14:07.151294 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.151560 kubelet[2748]: E0912 00:14:07.151541 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.151610 kubelet[2748]: W0912 00:14:07.151565 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.151610 kubelet[2748]: E0912 00:14:07.151577 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.151801 kubelet[2748]: E0912 00:14:07.151781 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.151801 kubelet[2748]: W0912 00:14:07.151793 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.151873 kubelet[2748]: E0912 00:14:07.151804 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.166551 kubelet[2748]: E0912 00:14:07.166489 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.166551 kubelet[2748]: W0912 00:14:07.166533 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.166551 kubelet[2748]: E0912 00:14:07.166563 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.166959 kubelet[2748]: E0912 00:14:07.166875 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.166959 kubelet[2748]: W0912 00:14:07.166889 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.166959 kubelet[2748]: E0912 00:14:07.166916 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.167276 kubelet[2748]: E0912 00:14:07.167229 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.167276 kubelet[2748]: W0912 00:14:07.167263 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.167276 kubelet[2748]: E0912 00:14:07.167299 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.167603 kubelet[2748]: E0912 00:14:07.167586 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.167603 kubelet[2748]: W0912 00:14:07.167601 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.167666 kubelet[2748]: E0912 00:14:07.167617 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.167826 kubelet[2748]: E0912 00:14:07.167808 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.167826 kubelet[2748]: W0912 00:14:07.167821 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.167826 kubelet[2748]: E0912 00:14:07.167836 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.168111 kubelet[2748]: E0912 00:14:07.168092 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.168111 kubelet[2748]: W0912 00:14:07.168107 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.168175 kubelet[2748]: E0912 00:14:07.168124 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.168516 kubelet[2748]: E0912 00:14:07.168491 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.168516 kubelet[2748]: W0912 00:14:07.168512 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.168734 kubelet[2748]: E0912 00:14:07.168546 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.168974 kubelet[2748]: E0912 00:14:07.168944 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.168974 kubelet[2748]: W0912 00:14:07.168957 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.169062 kubelet[2748]: E0912 00:14:07.169002 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.169264 kubelet[2748]: E0912 00:14:07.169216 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.169264 kubelet[2748]: W0912 00:14:07.169227 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.169264 kubelet[2748]: E0912 00:14:07.169261 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.169530 kubelet[2748]: E0912 00:14:07.169508 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.169530 kubelet[2748]: W0912 00:14:07.169524 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.169610 kubelet[2748]: E0912 00:14:07.169543 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.169831 kubelet[2748]: E0912 00:14:07.169810 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.169831 kubelet[2748]: W0912 00:14:07.169825 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.169831 kubelet[2748]: E0912 00:14:07.169844 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.170120 kubelet[2748]: E0912 00:14:07.170091 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.170120 kubelet[2748]: W0912 00:14:07.170110 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.170214 kubelet[2748]: E0912 00:14:07.170128 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.170414 kubelet[2748]: E0912 00:14:07.170394 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.170414 kubelet[2748]: W0912 00:14:07.170410 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.170508 kubelet[2748]: E0912 00:14:07.170430 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.170678 kubelet[2748]: E0912 00:14:07.170656 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.170678 kubelet[2748]: W0912 00:14:07.170671 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.170755 kubelet[2748]: E0912 00:14:07.170691 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.171038 kubelet[2748]: E0912 00:14:07.171014 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.171038 kubelet[2748]: W0912 00:14:07.171028 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.171124 kubelet[2748]: E0912 00:14:07.171051 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.171549 kubelet[2748]: E0912 00:14:07.171490 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.171549 kubelet[2748]: W0912 00:14:07.171546 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.171625 kubelet[2748]: E0912 00:14:07.171564 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:07.171886 kubelet[2748]: E0912 00:14:07.171864 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.171886 kubelet[2748]: W0912 00:14:07.171878 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.171973 kubelet[2748]: E0912 00:14:07.171894 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:07.172310 kubelet[2748]: E0912 00:14:07.172233 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:07.172342 kubelet[2748]: W0912 00:14:07.172309 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:07.172342 kubelet[2748]: E0912 00:14:07.172321 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.095296 kubelet[2748]: I0912 00:14:08.095247 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:14:08.095765 kubelet[2748]: E0912 00:14:08.095695 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:08.158505 kubelet[2748]: E0912 00:14:08.158450 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.158505 kubelet[2748]: W0912 00:14:08.158487 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.158505 kubelet[2748]: E0912 00:14:08.158516 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.158818 kubelet[2748]: E0912 00:14:08.158793 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.158818 kubelet[2748]: W0912 00:14:08.158809 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.158916 kubelet[2748]: E0912 00:14:08.158826 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.159071 kubelet[2748]: E0912 00:14:08.159014 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.159071 kubelet[2748]: W0912 00:14:08.159030 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.159071 kubelet[2748]: E0912 00:14:08.159041 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.159265 kubelet[2748]: E0912 00:14:08.159230 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.159265 kubelet[2748]: W0912 00:14:08.159257 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.159336 kubelet[2748]: E0912 00:14:08.159269 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.159573 kubelet[2748]: E0912 00:14:08.159540 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.159573 kubelet[2748]: W0912 00:14:08.159557 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.159573 kubelet[2748]: E0912 00:14:08.159569 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.159815 kubelet[2748]: E0912 00:14:08.159805 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.159860 kubelet[2748]: W0912 00:14:08.159817 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.159860 kubelet[2748]: E0912 00:14:08.159828 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.160045 kubelet[2748]: E0912 00:14:08.160029 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.160045 kubelet[2748]: W0912 00:14:08.160043 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.160045 kubelet[2748]: E0912 00:14:08.160054 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.160309 kubelet[2748]: E0912 00:14:08.160276 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.160309 kubelet[2748]: W0912 00:14:08.160293 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.160309 kubelet[2748]: E0912 00:14:08.160307 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.160575 kubelet[2748]: E0912 00:14:08.160556 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.160575 kubelet[2748]: W0912 00:14:08.160571 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.160638 kubelet[2748]: E0912 00:14:08.160582 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.160789 kubelet[2748]: E0912 00:14:08.160763 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.160789 kubelet[2748]: W0912 00:14:08.160779 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.160842 kubelet[2748]: E0912 00:14:08.160789 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.160972 kubelet[2748]: E0912 00:14:08.160950 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.160972 kubelet[2748]: W0912 00:14:08.160963 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.160972 kubelet[2748]: E0912 00:14:08.160973 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.161160 kubelet[2748]: E0912 00:14:08.161144 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.161160 kubelet[2748]: W0912 00:14:08.161157 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.161213 kubelet[2748]: E0912 00:14:08.161167 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.161408 kubelet[2748]: E0912 00:14:08.161389 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.161408 kubelet[2748]: W0912 00:14:08.161403 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.161408 kubelet[2748]: E0912 00:14:08.161423 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.161635 kubelet[2748]: E0912 00:14:08.161613 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.161635 kubelet[2748]: W0912 00:14:08.161627 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.161712 kubelet[2748]: E0912 00:14:08.161637 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.161821 kubelet[2748]: E0912 00:14:08.161802 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.161821 kubelet[2748]: W0912 00:14:08.161815 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.161912 kubelet[2748]: E0912 00:14:08.161825 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.176484 kubelet[2748]: E0912 00:14:08.176426 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.176484 kubelet[2748]: W0912 00:14:08.176460 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.176484 kubelet[2748]: E0912 00:14:08.176488 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.176879 kubelet[2748]: E0912 00:14:08.176825 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.176879 kubelet[2748]: W0912 00:14:08.176854 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.176960 kubelet[2748]: E0912 00:14:08.176897 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.177241 kubelet[2748]: E0912 00:14:08.177217 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.177241 kubelet[2748]: W0912 00:14:08.177234 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.177385 kubelet[2748]: E0912 00:14:08.177254 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.177580 kubelet[2748]: E0912 00:14:08.177531 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.177580 kubelet[2748]: W0912 00:14:08.177551 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.177580 kubelet[2748]: E0912 00:14:08.177570 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.177853 kubelet[2748]: E0912 00:14:08.177831 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.177853 kubelet[2748]: W0912 00:14:08.177846 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.177932 kubelet[2748]: E0912 00:14:08.177860 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.178122 kubelet[2748]: E0912 00:14:08.178104 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.178122 kubelet[2748]: W0912 00:14:08.178117 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.178189 kubelet[2748]: E0912 00:14:08.178130 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.178524 kubelet[2748]: E0912 00:14:08.178502 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.178588 kubelet[2748]: W0912 00:14:08.178519 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.178588 kubelet[2748]: E0912 00:14:08.178548 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.178819 kubelet[2748]: E0912 00:14:08.178796 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.178819 kubelet[2748]: W0912 00:14:08.178815 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.178926 kubelet[2748]: E0912 00:14:08.178852 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.179092 kubelet[2748]: E0912 00:14:08.179043 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.179092 kubelet[2748]: W0912 00:14:08.179072 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.179201 kubelet[2748]: E0912 00:14:08.179118 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.179305 kubelet[2748]: E0912 00:14:08.179282 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.179305 kubelet[2748]: W0912 00:14:08.179296 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.179391 kubelet[2748]: E0912 00:14:08.179314 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.179588 kubelet[2748]: E0912 00:14:08.179567 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.179588 kubelet[2748]: W0912 00:14:08.179580 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.179685 kubelet[2748]: E0912 00:14:08.179597 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:14:08.179848 kubelet[2748]: E0912 00:14:08.179818 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.179848 kubelet[2748]: W0912 00:14:08.179831 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.179926 kubelet[2748]: E0912 00:14:08.179869 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:14:08.180154 kubelet[2748]: E0912 00:14:08.180125 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:14:08.180154 kubelet[2748]: W0912 00:14:08.180138 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:14:08.180154 kubelet[2748]: E0912 00:14:08.180154 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 00:14:08.180456 kubelet[2748]: E0912 00:14:08.180388 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:14:08.180456 kubelet[2748]: W0912 00:14:08.180408 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:14:08.180456 kubelet[2748]: E0912 00:14:08.180432 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:14:08.180668 kubelet[2748]: E0912 00:14:08.180648 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:14:08.180668 kubelet[2748]: W0912 00:14:08.180666 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:14:08.180874 kubelet[2748]: E0912 00:14:08.180689 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:14:08.180973 kubelet[2748]: E0912 00:14:08.180944 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:14:08.180973 kubelet[2748]: W0912 00:14:08.180957 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:14:08.181051 kubelet[2748]: E0912 00:14:08.180976 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:14:08.181298 kubelet[2748]: E0912 00:14:08.181277 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:14:08.181298 kubelet[2748]: W0912 00:14:08.181293 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:14:08.181398 kubelet[2748]: E0912 00:14:08.181317 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:14:08.181582 kubelet[2748]: E0912 00:14:08.181562 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:14:08.181582 kubelet[2748]: W0912 00:14:08.181576 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:14:08.181647 kubelet[2748]: E0912 00:14:08.181586 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:14:08.764007 containerd[1586]: time="2025-09-12T00:14:08.763907960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:08.776819 containerd[1586]: time="2025-09-12T00:14:08.776728341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 12 00:14:08.807211 containerd[1586]: time="2025-09-12T00:14:08.807084775Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:08.835638 containerd[1586]: time="2025-09-12T00:14:08.835572435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:08.836292 containerd[1586]: time="2025-09-12T00:14:08.836243127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.178682384s"
Sep 12 00:14:08.836406 containerd[1586]: time="2025-09-12T00:14:08.836290756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 00:14:08.838606 containerd[1586]: time="2025-09-12T00:14:08.838569443Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 00:14:09.019387 kubelet[2748]: E0912 00:14:09.019224 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:09.186142 containerd[1586]: time="2025-09-12T00:14:09.186068919Z" level=info msg="Container 8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:14:09.525596 containerd[1586]: time="2025-09-12T00:14:09.525521750Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\""
Sep 12 00:14:09.526271 containerd[1586]: time="2025-09-12T00:14:09.526227558Z" level=info msg="StartContainer for \"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\""
Sep 12 00:14:09.527771 containerd[1586]: time="2025-09-12T00:14:09.527741125Z" level=info msg="connecting to shim 8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6" address="unix:///run/containerd/s/8b85ce2d57f5c7cfecc20ccd79799a0002d7a6c2b1f31983f43abfd8f35f4307" protocol=ttrpc version=3
Sep 12 00:14:09.562669 systemd[1]: Started cri-containerd-8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6.scope - libcontainer container 8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6.
Sep 12 00:14:09.626326 systemd[1]: cri-containerd-8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6.scope: Deactivated successfully.
Sep 12 00:14:09.628155 containerd[1586]: time="2025-09-12T00:14:09.628102426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\" id:\"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\" pid:3481 exited_at:{seconds:1757636049 nanos:627590875}"
Sep 12 00:14:09.806204 containerd[1586]: time="2025-09-12T00:14:09.806023007Z" level=info msg="received exit event container_id:\"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\" id:\"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\" pid:3481 exited_at:{seconds:1757636049 nanos:627590875}"
Sep 12 00:14:09.809351 containerd[1586]: time="2025-09-12T00:14:09.809136384Z" level=info msg="StartContainer for \"8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6\" returns successfully"
Sep 12 00:14:09.835899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a02dc6e1332c974d9691e9974c58d97c60e210f79dce00985d88666ecc065f6-rootfs.mount: Deactivated successfully.
Sep 12 00:14:10.220005 kubelet[2748]: I0912 00:14:10.219506 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d79d4dd4b-flbhv" podStartSLOduration=4.142350825 podStartE2EDuration="7.219442353s" podCreationTimestamp="2025-09-12 00:14:03 +0000 UTC" firstStartedPulling="2025-09-12 00:14:03.580155235 +0000 UTC m=+19.670130026" lastFinishedPulling="2025-09-12 00:14:06.657246763 +0000 UTC m=+22.747221554" observedRunningTime="2025-09-12 00:14:07.110070846 +0000 UTC m=+23.200045637" watchObservedRunningTime="2025-09-12 00:14:10.219442353 +0000 UTC m=+26.309417144"
Sep 12 00:14:11.019043 kubelet[2748]: E0912 00:14:11.018961 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:11.107134 containerd[1586]: time="2025-09-12T00:14:11.107082617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 00:14:13.019388 kubelet[2748]: E0912 00:14:13.019299 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:15.019048 kubelet[2748]: E0912 00:14:15.018987 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:17.019328 kubelet[2748]: E0912 00:14:17.019264 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:17.778671 containerd[1586]: time="2025-09-12T00:14:17.778590036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:17.779982 containerd[1586]: time="2025-09-12T00:14:17.779946544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 00:14:17.781702 containerd[1586]: time="2025-09-12T00:14:17.781646739Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:17.785965 containerd[1586]: time="2025-09-12T00:14:17.785882978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.678754745s"
Sep 12 00:14:17.785965 containerd[1586]: time="2025-09-12T00:14:17.785948822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 00:14:17.786428 containerd[1586]: time="2025-09-12T00:14:17.786160490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:14:17.789172 containerd[1586]: time="2025-09-12T00:14:17.789117757Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 00:14:18.006788 containerd[1586]: time="2025-09-12T00:14:18.006701622Z" level=info msg="Container 6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:14:18.030307 containerd[1586]: time="2025-09-12T00:14:18.030167488Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\""
Sep 12 00:14:18.032382 containerd[1586]: time="2025-09-12T00:14:18.030763638Z" level=info msg="StartContainer for \"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\""
Sep 12 00:14:18.032688 containerd[1586]: time="2025-09-12T00:14:18.032651885Z" level=info msg="connecting to shim 6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4" address="unix:///run/containerd/s/8b85ce2d57f5c7cfecc20ccd79799a0002d7a6c2b1f31983f43abfd8f35f4307" protocol=ttrpc version=3
Sep 12 00:14:18.061577 systemd[1]: Started cri-containerd-6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4.scope - libcontainer container 6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4.
Sep 12 00:14:18.110408 containerd[1586]: time="2025-09-12T00:14:18.110339279Z" level=info msg="StartContainer for \"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\" returns successfully"
Sep 12 00:14:19.019666 kubelet[2748]: E0912 00:14:19.019597 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmdzp" podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc"
Sep 12 00:14:19.378161 systemd[1]: cri-containerd-6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4.scope: Deactivated successfully.
Sep 12 00:14:19.378669 systemd[1]: cri-containerd-6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4.scope: Consumed 635ms CPU time, 178.2M memory peak, 2.7M read from disk, 171.3M written to disk.
Sep 12 00:14:19.379566 containerd[1586]: time="2025-09-12T00:14:19.379523723Z" level=info msg="received exit event container_id:\"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\" id:\"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\" pid:3541 exited_at:{seconds:1757636059 nanos:379274896}"
Sep 12 00:14:19.379906 containerd[1586]: time="2025-09-12T00:14:19.379677582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\" id:\"6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4\" pid:3541 exited_at:{seconds:1757636059 nanos:379274896}"
Sep 12 00:14:19.404127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f9c38de1a6637080dbfb13d0b7758b585f64bf8f5893bced9890f9b0d3c99e4-rootfs.mount: Deactivated successfully.
Sep 12 00:14:19.471815 kubelet[2748]: I0912 00:14:19.471733 2748 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 00:14:20.088863 systemd[1]: Created slice kubepods-besteffort-pode16bddf2_c561_47c4_bd5f_16310c0129e0.slice - libcontainer container kubepods-besteffort-pode16bddf2_c561_47c4_bd5f_16310c0129e0.slice.
Sep 12 00:14:20.096409 systemd[1]: Created slice kubepods-burstable-pod70a29a4d_6cd3_4232_ba35_c5a38be87f36.slice - libcontainer container kubepods-burstable-pod70a29a4d_6cd3_4232_ba35_c5a38be87f36.slice.
Sep 12 00:14:20.103289 systemd[1]: Created slice kubepods-burstable-pod1b92995e_5cd8_4892_84c7_b3e0721991e6.slice - libcontainer container kubepods-burstable-pod1b92995e_5cd8_4892_84c7_b3e0721991e6.slice.
Sep 12 00:14:20.109517 systemd[1]: Created slice kubepods-besteffort-podec9e5ecf_576e_41f0_ad72_4d4444b41b88.slice - libcontainer container kubepods-besteffort-podec9e5ecf_576e_41f0_ad72_4d4444b41b88.slice.
Sep 12 00:14:20.115321 systemd[1]: Created slice kubepods-besteffort-pod423d4570_8291_4176_983b_647c391185ae.slice - libcontainer container kubepods-besteffort-pod423d4570_8291_4176_983b_647c391185ae.slice.
Sep 12 00:14:20.124750 systemd[1]: Created slice kubepods-besteffort-pod668908df_9f8b_450a_bb7d_f0abb01d8bf3.slice - libcontainer container kubepods-besteffort-pod668908df_9f8b_450a_bb7d_f0abb01d8bf3.slice.
Sep 12 00:14:20.131412 systemd[1]: Created slice kubepods-besteffort-pod9d49c369_50e2_4ca6_bb1c_be30deace4e7.slice - libcontainer container kubepods-besteffort-pod9d49c369_50e2_4ca6_bb1c_be30deace4e7.slice.
Sep 12 00:14:20.163610 kubelet[2748]: I0912 00:14:20.163550 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b92995e-5cd8-4892-84c7-b3e0721991e6-config-volume\") pod \"coredns-668d6bf9bc-9gk4r\" (UID: \"1b92995e-5cd8-4892-84c7-b3e0721991e6\") " pod="kube-system/coredns-668d6bf9bc-9gk4r"
Sep 12 00:14:20.163610 kubelet[2748]: I0912 00:14:20.163611 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdlxt\" (UniqueName: \"kubernetes.io/projected/1b92995e-5cd8-4892-84c7-b3e0721991e6-kube-api-access-hdlxt\") pod \"coredns-668d6bf9bc-9gk4r\" (UID: \"1b92995e-5cd8-4892-84c7-b3e0721991e6\") " pod="kube-system/coredns-668d6bf9bc-9gk4r"
Sep 12 00:14:20.164218 kubelet[2748]: I0912 00:14:20.163643 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d49c369-50e2-4ca6-bb1c-be30deace4e7-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-ztw7n\" (UID: \"9d49c369-50e2-4ca6-bb1c-be30deace4e7\") " pod="calico-system/goldmane-54d579b49d-ztw7n"
Sep 12 00:14:20.164218 kubelet[2748]: I0912 00:14:20.163666 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6mx\" (UniqueName: \"kubernetes.io/projected/9d49c369-50e2-4ca6-bb1c-be30deace4e7-kube-api-access-5g6mx\") pod \"goldmane-54d579b49d-ztw7n\" (UID: \"9d49c369-50e2-4ca6-bb1c-be30deace4e7\") " pod="calico-system/goldmane-54d579b49d-ztw7n"
Sep 12 00:14:20.164218 kubelet[2748]: I0912 00:14:20.163688 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx66\" (UniqueName: \"kubernetes.io/projected/423d4570-8291-4176-983b-647c391185ae-kube-api-access-zvx66\") pod \"whisker-5df5555d8d-jbpk8\" (UID: \"423d4570-8291-4176-983b-647c391185ae\") " pod="calico-system/whisker-5df5555d8d-jbpk8"
Sep 12 00:14:20.164218 kubelet[2748]: I0912 00:14:20.163709 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e16bddf2-c561-47c4-bd5f-16310c0129e0-tigera-ca-bundle\") pod \"calico-kube-controllers-58555d9d7-fshxl\" (UID: \"e16bddf2-c561-47c4-bd5f-16310c0129e0\") " pod="calico-system/calico-kube-controllers-58555d9d7-fshxl"
Sep 12 00:14:20.164218 kubelet[2748]: I0912 00:14:20.163731 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d49c369-50e2-4ca6-bb1c-be30deace4e7-goldmane-key-pair\") pod \"goldmane-54d579b49d-ztw7n\" (UID: \"9d49c369-50e2-4ca6-bb1c-be30deace4e7\") " pod="calico-system/goldmane-54d579b49d-ztw7n"
Sep 12 00:14:20.164437 kubelet[2748]: I0912 00:14:20.163765 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtrg\" (UniqueName: \"kubernetes.io/projected/ec9e5ecf-576e-41f0-ad72-4d4444b41b88-kube-api-access-jjtrg\") pod \"calico-apiserver-67bdfdb66c-8ldzv\" (UID: \"ec9e5ecf-576e-41f0-ad72-4d4444b41b88\") " pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv"
Sep 12 00:14:20.164437 kubelet[2748]: I0912 00:14:20.163789 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49gz\" (UniqueName: \"kubernetes.io/projected/e16bddf2-c561-47c4-bd5f-16310c0129e0-kube-api-access-p49gz\") pod \"calico-kube-controllers-58555d9d7-fshxl\" (UID: \"e16bddf2-c561-47c4-bd5f-16310c0129e0\") " pod="calico-system/calico-kube-controllers-58555d9d7-fshxl"
Sep 12 00:14:20.164437 kubelet[2748]: I0912 00:14:20.163817 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec9e5ecf-576e-41f0-ad72-4d4444b41b88-calico-apiserver-certs\") pod \"calico-apiserver-67bdfdb66c-8ldzv\" (UID: \"ec9e5ecf-576e-41f0-ad72-4d4444b41b88\") " pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv"
Sep 12 00:14:20.164437 kubelet[2748]: I0912 00:14:20.163837 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/423d4570-8291-4176-983b-647c391185ae-whisker-backend-key-pair\") pod \"whisker-5df5555d8d-jbpk8\" (UID: \"423d4570-8291-4176-983b-647c391185ae\") " pod="calico-system/whisker-5df5555d8d-jbpk8"
Sep 12 00:14:20.164437 kubelet[2748]: I0912 00:14:20.163861 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/668908df-9f8b-450a-bb7d-f0abb01d8bf3-calico-apiserver-certs\") pod \"calico-apiserver-67bdfdb66c-vd47f\" (UID: \"668908df-9f8b-450a-bb7d-f0abb01d8bf3\") " pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f"
Sep 12 00:14:20.164601 kubelet[2748]: I0912 00:14:20.163905 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z85k\" (UniqueName: \"kubernetes.io/projected/70a29a4d-6cd3-4232-ba35-c5a38be87f36-kube-api-access-7z85k\") pod \"coredns-668d6bf9bc-vmt22\" (UID: \"70a29a4d-6cd3-4232-ba35-c5a38be87f36\") " pod="kube-system/coredns-668d6bf9bc-vmt22"
Sep 12 00:14:20.164601 kubelet[2748]: I0912 00:14:20.163926 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d49c369-50e2-4ca6-bb1c-be30deace4e7-config\") pod \"goldmane-54d579b49d-ztw7n\" (UID: \"9d49c369-50e2-4ca6-bb1c-be30deace4e7\") " pod="calico-system/goldmane-54d579b49d-ztw7n"
Sep 12 00:14:20.164601 kubelet[2748]: I0912 00:14:20.163950 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70a29a4d-6cd3-4232-ba35-c5a38be87f36-config-volume\") pod \"coredns-668d6bf9bc-vmt22\" (UID: \"70a29a4d-6cd3-4232-ba35-c5a38be87f36\") " pod="kube-system/coredns-668d6bf9bc-vmt22"
Sep 12 00:14:20.164601 kubelet[2748]: I0912 00:14:20.163971 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/423d4570-8291-4176-983b-647c391185ae-whisker-ca-bundle\") pod \"whisker-5df5555d8d-jbpk8\" (UID: \"423d4570-8291-4176-983b-647c391185ae\") " pod="calico-system/whisker-5df5555d8d-jbpk8"
Sep 12 00:14:20.164601 kubelet[2748]: I0912 00:14:20.163995 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grd2\" (UniqueName: \"kubernetes.io/projected/668908df-9f8b-450a-bb7d-f0abb01d8bf3-kube-api-access-8grd2\") pod \"calico-apiserver-67bdfdb66c-vd47f\" (UID: \"668908df-9f8b-450a-bb7d-f0abb01d8bf3\") " pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f"
Sep 12 00:14:20.169705 containerd[1586]: time="2025-09-12T00:14:20.169619386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 00:14:20.394062 containerd[1586]: time="2025-09-12T00:14:20.393912302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58555d9d7-fshxl,Uid:e16bddf2-c561-47c4-bd5f-16310c0129e0,Namespace:calico-system,Attempt:0,}"
Sep 12 00:14:20.400319 kubelet[2748]: E0912 00:14:20.400276 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:14:20.400790 containerd[1586]: time="2025-09-12T00:14:20.400753843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmt22,Uid:70a29a4d-6cd3-4232-ba35-c5a38be87f36,Namespace:kube-system,Attempt:0,}"
Sep 12 00:14:20.407272 kubelet[2748]: E0912 00:14:20.406386 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:14:20.407770 containerd[1586]: time="2025-09-12T00:14:20.407725097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gk4r,Uid:1b92995e-5cd8-4892-84c7-b3e0721991e6,Namespace:kube-system,Attempt:0,}"
Sep 12 00:14:20.418751 containerd[1586]: time="2025-09-12T00:14:20.418710152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-8ldzv,Uid:ec9e5ecf-576e-41f0-ad72-4d4444b41b88,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 00:14:20.423230 containerd[1586]: time="2025-09-12T00:14:20.423097843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df5555d8d-jbpk8,Uid:423d4570-8291-4176-983b-647c391185ae,Namespace:calico-system,Attempt:0,}"
Sep 12 00:14:20.430387 containerd[1586]: time="2025-09-12T00:14:20.430317814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-vd47f,Uid:668908df-9f8b-450a-bb7d-f0abb01d8bf3,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 00:14:20.435428 containerd[1586]: time="2025-09-12T00:14:20.435189145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ztw7n,Uid:9d49c369-50e2-4ca6-bb1c-be30deace4e7,Namespace:calico-system,Attempt:0,}"
Sep 12 00:14:20.552987 containerd[1586]: time="2025-09-12T00:14:20.552922031Z" level=error msg="Failed to destroy network for sandbox \"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.569388 containerd[1586]: time="2025-09-12T00:14:20.568680211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df5555d8d-jbpk8,Uid:423d4570-8291-4176-983b-647c391185ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.574986 kubelet[2748]: E0912 00:14:20.574918 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.575179 kubelet[2748]: E0912 00:14:20.575009 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df5555d8d-jbpk8"
Sep 12 00:14:20.575179 kubelet[2748]: E0912 00:14:20.575037 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df5555d8d-jbpk8"
Sep 12 00:14:20.575179 kubelet[2748]: E0912 00:14:20.575083 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df5555d8d-jbpk8_calico-system(423d4570-8291-4176-983b-647c391185ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df5555d8d-jbpk8_calico-system(423d4570-8291-4176-983b-647c391185ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25558a5da286905c9adf34f93c01e6ca0bdc814c822113aafa26bdef5642be82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df5555d8d-jbpk8" podUID="423d4570-8291-4176-983b-647c391185ae"
Sep 12 00:14:20.576738 containerd[1586]: time="2025-09-12T00:14:20.576674597Z" level=error msg="Failed to destroy network for sandbox \"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.579767 containerd[1586]: time="2025-09-12T00:14:20.579717393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-8ldzv,Uid:ec9e5ecf-576e-41f0-ad72-4d4444b41b88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.580069 kubelet[2748]: E0912 00:14:20.579978 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.580199 kubelet[2748]: E0912 00:14:20.580088 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv"
Sep 12 00:14:20.580199 kubelet[2748]: E0912 00:14:20.580114 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv"
Sep 12 00:14:20.580323 kubelet[2748]: E0912 00:14:20.580218 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67bdfdb66c-8ldzv_calico-apiserver(ec9e5ecf-576e-41f0-ad72-4d4444b41b88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67bdfdb66c-8ldzv_calico-apiserver(ec9e5ecf-576e-41f0-ad72-4d4444b41b88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"951972e3fe439e03ebd902df120a38c8e6e3179d5f3ba44312a4bf7e69a2a916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv" podUID="ec9e5ecf-576e-41f0-ad72-4d4444b41b88"
Sep 12 00:14:20.585890 containerd[1586]: time="2025-09-12T00:14:20.585835155Z" level=error msg="Failed to destroy network for sandbox \"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.587976 containerd[1586]: time="2025-09-12T00:14:20.587927274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gk4r,Uid:1b92995e-5cd8-4892-84c7-b3e0721991e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.588454 kubelet[2748]: E0912 00:14:20.588369 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.588647 kubelet[2748]: E0912 00:14:20.588460 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9gk4r"
Sep 12 00:14:20.588647 kubelet[2748]: E0912 00:14:20.588487 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9gk4r"
Sep 12 00:14:20.588647 kubelet[2748]: E0912 00:14:20.588535 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9gk4r_kube-system(1b92995e-5cd8-4892-84c7-b3e0721991e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9gk4r_kube-system(1b92995e-5cd8-4892-84c7-b3e0721991e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42c59d4ce78bdd5c19c0d0ca8f91edbd5758b8cff46b2c00266b6b274560985e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9gk4r" podUID="1b92995e-5cd8-4892-84c7-b3e0721991e6"
Sep 12 00:14:20.592503 containerd[1586]: time="2025-09-12T00:14:20.592427738Z" level=error msg="Failed to destroy network for sandbox \"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 00:14:20.594432 containerd[1586]: time="2025-09-12T00:14:20.593659912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmt22,Uid:70a29a4d-6cd3-4232-ba35-c5a38be87f36,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.594548 kubelet[2748]: E0912 00:14:20.593847 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.594548 kubelet[2748]: E0912 00:14:20.593888 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vmt22" Sep 12 00:14:20.594548 kubelet[2748]: E0912 00:14:20.593906 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vmt22" Sep 12 00:14:20.594670 kubelet[2748]: E0912 00:14:20.593947 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vmt22_kube-system(70a29a4d-6cd3-4232-ba35-c5a38be87f36)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-vmt22_kube-system(70a29a4d-6cd3-4232-ba35-c5a38be87f36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0991fd0b06d737d8d469e1bed93a058dfcc4857bca4b41bf2b5bb8f9db0f66a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vmt22" podUID="70a29a4d-6cd3-4232-ba35-c5a38be87f36" Sep 12 00:14:20.595939 containerd[1586]: time="2025-09-12T00:14:20.595888177Z" level=error msg="Failed to destroy network for sandbox \"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.597202 containerd[1586]: time="2025-09-12T00:14:20.597151651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-vd47f,Uid:668908df-9f8b-450a-bb7d-f0abb01d8bf3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.597466 kubelet[2748]: E0912 00:14:20.597413 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.597589 kubelet[2748]: E0912 
00:14:20.597561 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f" Sep 12 00:14:20.597589 kubelet[2748]: E0912 00:14:20.597587 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f" Sep 12 00:14:20.597692 kubelet[2748]: E0912 00:14:20.597621 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67bdfdb66c-vd47f_calico-apiserver(668908df-9f8b-450a-bb7d-f0abb01d8bf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67bdfdb66c-vd47f_calico-apiserver(668908df-9f8b-450a-bb7d-f0abb01d8bf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d7390fe64ed274a6e46175fda0e7bb6a9ba80f1dd04839cb29bf685612b1f8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f" podUID="668908df-9f8b-450a-bb7d-f0abb01d8bf3" Sep 12 00:14:20.600626 containerd[1586]: time="2025-09-12T00:14:20.600563570Z" level=error msg="Failed to destroy network for sandbox 
\"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.604149 containerd[1586]: time="2025-09-12T00:14:20.604090965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58555d9d7-fshxl,Uid:e16bddf2-c561-47c4-bd5f-16310c0129e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.604401 kubelet[2748]: E0912 00:14:20.604326 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.604475 kubelet[2748]: E0912 00:14:20.604422 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58555d9d7-fshxl" Sep 12 00:14:20.604475 kubelet[2748]: E0912 00:14:20.604467 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58555d9d7-fshxl" Sep 12 00:14:20.604649 kubelet[2748]: E0912 00:14:20.604515 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58555d9d7-fshxl_calico-system(e16bddf2-c561-47c4-bd5f-16310c0129e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58555d9d7-fshxl_calico-system(e16bddf2-c561-47c4-bd5f-16310c0129e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a4d6f804a1d50c057a068b2783eb2faa219b8a6b42a1e3f60d9a1b524154ff1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58555d9d7-fshxl" podUID="e16bddf2-c561-47c4-bd5f-16310c0129e0" Sep 12 00:14:20.610514 containerd[1586]: time="2025-09-12T00:14:20.610461011Z" level=error msg="Failed to destroy network for sandbox \"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.611974 containerd[1586]: time="2025-09-12T00:14:20.611916565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ztw7n,Uid:9d49c369-50e2-4ca6-bb1c-be30deace4e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.612227 kubelet[2748]: E0912 00:14:20.612179 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:20.612270 kubelet[2748]: E0912 00:14:20.612254 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ztw7n" Sep 12 00:14:20.612295 kubelet[2748]: E0912 00:14:20.612277 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ztw7n" Sep 12 00:14:20.612438 kubelet[2748]: E0912 00:14:20.612328 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ztw7n_calico-system(9d49c369-50e2-4ca6-bb1c-be30deace4e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ztw7n_calico-system(9d49c369-50e2-4ca6-bb1c-be30deace4e7)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"cd0dcfbdd8419d71ee648df3974b905dc0175b483f7e1b059ae2e5bc804aa85a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ztw7n" podUID="9d49c369-50e2-4ca6-bb1c-be30deace4e7" Sep 12 00:14:21.026686 systemd[1]: Created slice kubepods-besteffort-pod4a8d8933_0196_4a27_a293_4f89ec69d3dc.slice - libcontainer container kubepods-besteffort-pod4a8d8933_0196_4a27_a293_4f89ec69d3dc.slice. Sep 12 00:14:21.030161 containerd[1586]: time="2025-09-12T00:14:21.030118369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmdzp,Uid:4a8d8933-0196-4a27-a293-4f89ec69d3dc,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:21.162581 containerd[1586]: time="2025-09-12T00:14:21.162479049Z" level=error msg="Failed to destroy network for sandbox \"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:21.258504 containerd[1586]: time="2025-09-12T00:14:21.258315900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmdzp,Uid:4a8d8933-0196-4a27-a293-4f89ec69d3dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:21.258785 kubelet[2748]: E0912 00:14:21.258651 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:14:21.258785 kubelet[2748]: E0912 00:14:21.258718 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:21.258785 kubelet[2748]: E0912 00:14:21.258740 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmdzp" Sep 12 00:14:21.259289 kubelet[2748]: E0912 00:14:21.258787 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kmdzp_calico-system(4a8d8933-0196-4a27-a293-4f89ec69d3dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kmdzp_calico-system(4a8d8933-0196-4a27-a293-4f89ec69d3dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"771e21460b1346c8607e8ed5c382048fb63407d09b947318760d343e8ba73756\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kmdzp" 
podUID="4a8d8933-0196-4a27-a293-4f89ec69d3dc" Sep 12 00:14:21.404497 systemd[1]: run-netns-cni\x2d363173e2\x2da405\x2d1436\x2dd8ad\x2d330e168fb188.mount: Deactivated successfully. Sep 12 00:14:21.404617 systemd[1]: run-netns-cni\x2d9b2fbaf2\x2dfd84\x2db80c\x2dca30\x2dc3c49dcb747b.mount: Deactivated successfully. Sep 12 00:14:21.404691 systemd[1]: run-netns-cni\x2daecf0cee\x2d540e\x2d9243\x2d0e43\x2dfee49abc30d8.mount: Deactivated successfully. Sep 12 00:14:21.404758 systemd[1]: run-netns-cni\x2d16ea9709\x2d5ae5\x2d4a96\x2d9a7a\x2db1f591058c98.mount: Deactivated successfully. Sep 12 00:14:21.404828 systemd[1]: run-netns-cni\x2dab41bdd7\x2d9518\x2d17bf\x2d617c\x2dc337b41a4279.mount: Deactivated successfully. Sep 12 00:14:21.404894 systemd[1]: run-netns-cni\x2d8df64036\x2d78ac\x2d1dd1\x2dc738\x2d57a7e36c61cf.mount: Deactivated successfully. Sep 12 00:14:28.903022 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:37440.service - OpenSSH per-connection server daemon (10.0.0.1:37440). Sep 12 00:14:29.058298 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 37440 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:29.061044 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:29.104033 systemd-logind[1568]: New session 8 of user core. Sep 12 00:14:29.108585 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 00:14:29.133440 kubelet[2748]: I0912 00:14:29.132477 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:14:29.133440 kubelet[2748]: E0912 00:14:29.132994 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:29.190388 kubelet[2748]: E0912 00:14:29.189541 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:29.292927 sshd[3859]: Connection closed by 10.0.0.1 port 37440 Sep 12 00:14:29.295887 sshd-session[3856]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:29.304724 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:37440.service: Deactivated successfully. Sep 12 00:14:29.309482 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 00:14:29.310941 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Sep 12 00:14:29.312418 systemd-logind[1568]: Removed session 8. Sep 12 00:14:30.029050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521322492.mount: Deactivated successfully. 
Sep 12 00:14:31.570798 containerd[1586]: time="2025-09-12T00:14:31.570722165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:31.571854 containerd[1586]: time="2025-09-12T00:14:31.571801390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 00:14:31.573892 containerd[1586]: time="2025-09-12T00:14:31.573832232Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:31.576202 containerd[1586]: time="2025-09-12T00:14:31.576164640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:31.576897 containerd[1586]: time="2025-09-12T00:14:31.576781038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.407091399s" Sep 12 00:14:31.576897 containerd[1586]: time="2025-09-12T00:14:31.576827775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 00:14:31.591382 containerd[1586]: time="2025-09-12T00:14:31.589977638Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 00:14:31.648767 containerd[1586]: time="2025-09-12T00:14:31.648710160Z" level=info msg="Container 
8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:31.666824 containerd[1586]: time="2025-09-12T00:14:31.666767023Z" level=info msg="CreateContainer within sandbox \"328c8ae2eae96473bc675ade100da85ea6f5d95e49c2d31ee54b51ee386b7cf2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\"" Sep 12 00:14:31.674871 containerd[1586]: time="2025-09-12T00:14:31.674789421Z" level=info msg="StartContainer for \"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\"" Sep 12 00:14:31.676618 containerd[1586]: time="2025-09-12T00:14:31.676586405Z" level=info msg="connecting to shim 8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59" address="unix:///run/containerd/s/8b85ce2d57f5c7cfecc20ccd79799a0002d7a6c2b1f31983f43abfd8f35f4307" protocol=ttrpc version=3 Sep 12 00:14:31.697568 systemd[1]: Started cri-containerd-8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59.scope - libcontainer container 8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59. Sep 12 00:14:31.759729 containerd[1586]: time="2025-09-12T00:14:31.759654785Z" level=info msg="StartContainer for \"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\" returns successfully" Sep 12 00:14:31.841248 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 00:14:31.841437 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 12 00:14:32.019607 kubelet[2748]: E0912 00:14:32.019552 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:32.020260 kubelet[2748]: E0912 00:14:32.019665 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:32.020310 containerd[1586]: time="2025-09-12T00:14:32.020254675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmt22,Uid:70a29a4d-6cd3-4232-ba35-c5a38be87f36,Namespace:kube-system,Attempt:0,}" Sep 12 00:14:32.020506 containerd[1586]: time="2025-09-12T00:14:32.020337631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gk4r,Uid:1b92995e-5cd8-4892-84c7-b3e0721991e6,Namespace:kube-system,Attempt:0,}" Sep 12 00:14:32.404920 containerd[1586]: time="2025-09-12T00:14:32.404754348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\" id:\"c9639d34e4cf6547c01d45f2acf572549490111af9f4f70cfe8bcd68c2f3659d\" pid:3968 exit_status:1 exited_at:{seconds:1757636072 nanos:404337807}" Sep 12 00:14:32.555997 kubelet[2748]: I0912 00:14:32.555933 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvx66\" (UniqueName: \"kubernetes.io/projected/423d4570-8291-4176-983b-647c391185ae-kube-api-access-zvx66\") pod \"423d4570-8291-4176-983b-647c391185ae\" (UID: \"423d4570-8291-4176-983b-647c391185ae\") " Sep 12 00:14:32.555997 kubelet[2748]: I0912 00:14:32.555991 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/423d4570-8291-4176-983b-647c391185ae-whisker-ca-bundle\") pod \"423d4570-8291-4176-983b-647c391185ae\" (UID: 
\"423d4570-8291-4176-983b-647c391185ae\") " Sep 12 00:14:32.556185 kubelet[2748]: I0912 00:14:32.556017 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/423d4570-8291-4176-983b-647c391185ae-whisker-backend-key-pair\") pod \"423d4570-8291-4176-983b-647c391185ae\" (UID: \"423d4570-8291-4176-983b-647c391185ae\") " Sep 12 00:14:32.556652 kubelet[2748]: I0912 00:14:32.556608 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/423d4570-8291-4176-983b-647c391185ae-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "423d4570-8291-4176-983b-647c391185ae" (UID: "423d4570-8291-4176-983b-647c391185ae"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 00:14:32.560416 kubelet[2748]: I0912 00:14:32.560374 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423d4570-8291-4176-983b-647c391185ae-kube-api-access-zvx66" (OuterVolumeSpecName: "kube-api-access-zvx66") pod "423d4570-8291-4176-983b-647c391185ae" (UID: "423d4570-8291-4176-983b-647c391185ae"). InnerVolumeSpecName "kube-api-access-zvx66". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 00:14:32.560416 kubelet[2748]: I0912 00:14:32.560381 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423d4570-8291-4176-983b-647c391185ae-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "423d4570-8291-4176-983b-647c391185ae" (UID: "423d4570-8291-4176-983b-647c391185ae"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 00:14:32.587487 systemd[1]: var-lib-kubelet-pods-423d4570\x2d8291\x2d4176\x2d983b\x2d647c391185ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzvx66.mount: Deactivated successfully. 
Sep 12 00:14:32.587863 systemd[1]: var-lib-kubelet-pods-423d4570\x2d8291\x2d4176\x2d983b\x2d647c391185ae-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 00:14:32.657440 kubelet[2748]: I0912 00:14:32.657309 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/423d4570-8291-4176-983b-647c391185ae-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 00:14:32.657440 kubelet[2748]: I0912 00:14:32.657386 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/423d4570-8291-4176-983b-647c391185ae-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 00:14:32.657440 kubelet[2748]: I0912 00:14:32.657401 2748 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zvx66\" (UniqueName: \"kubernetes.io/projected/423d4570-8291-4176-983b-647c391185ae-kube-api-access-zvx66\") on node \"localhost\" DevicePath \"\"" Sep 12 00:14:32.791254 systemd-networkd[1496]: cali4c1d68b3a80: Link UP Sep 12 00:14:32.792655 systemd-networkd[1496]: cali4c1d68b3a80: Gained carrier Sep 12 00:14:32.801391 kubelet[2748]: I0912 00:14:32.801262 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tw5hs" podStartSLOduration=1.994752171 podStartE2EDuration="29.80122565s" podCreationTimestamp="2025-09-12 00:14:03 +0000 UTC" firstStartedPulling="2025-09-12 00:14:03.771543519 +0000 UTC m=+19.861518310" lastFinishedPulling="2025-09-12 00:14:31.578016998 +0000 UTC m=+47.667991789" observedRunningTime="2025-09-12 00:14:32.540460069 +0000 UTC m=+48.630434860" watchObservedRunningTime="2025-09-12 00:14:32.80122565 +0000 UTC m=+48.891200431" Sep 12 00:14:32.811212 containerd[1586]: 2025-09-12 00:14:32.169 [INFO][3920] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:14:32.811212 containerd[1586]: 
2025-09-12 00:14:32.565 [INFO][3920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vmt22-eth0 coredns-668d6bf9bc- kube-system 70a29a4d-6cd3-4232-ba35-c5a38be87f36 833 0 2025-09-12 00:13:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vmt22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c1d68b3a80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-" Sep 12 00:14:32.811212 containerd[1586]: 2025-09-12 00:14:32.565 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.811212 containerd[1586]: 2025-09-12 00:14:32.652 [INFO][3985] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" HandleID="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Workload="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3985] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" HandleID="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Workload="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f76c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vmt22", "timestamp":"2025-09-12 00:14:32.652865517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3985] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.702 [INFO][3985] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" host="localhost" Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.739 [INFO][3985] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.748 [INFO][3985] ipam/ipam.go 543: Ran out of existing affine blocks for host host="localhost" Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.750 [INFO][3985] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.752 [INFO][3985] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.752 [INFO][3985] ipam/ipam.go 572: Found unclaimed block host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811689 containerd[1586]: 2025-09-12 00:14:32.752 [INFO][3985] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.755 [INFO][3985] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.755 [INFO][3985] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.757 [INFO][3985] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.759 [INFO][3985] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.761 [INFO][3985] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.761 [INFO][3985] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.765 [INFO][3985] ipam/ipam_block_reader_writer.go 267: Successfully created block Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.765 [INFO][3985] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" 
subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.768 [INFO][3985] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.768 [INFO][3985] ipam/ipam.go 607: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.768 [INFO][3985] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" host="localhost" Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.770 [INFO][3985] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41 Sep 12 00:14:32.811954 containerd[1586]: 2025-09-12 00:14:32.773 [INFO][3985] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" host="localhost" Sep 12 00:14:32.812212 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3985] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" host="localhost" Sep 12 00:14:32.812212 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3985] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" host="localhost" Sep 12 00:14:32.812212 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:14:32.812212 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3985] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" HandleID="k8s-pod-network.fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Workload="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.812288 containerd[1586]: 2025-09-12 00:14:32.781 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmt22-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"70a29a4d-6cd3-4232-ba35-c5a38be87f36", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vmt22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c1d68b3a80", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:32.812383 containerd[1586]: 2025-09-12 00:14:32.782 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.812383 containerd[1586]: 2025-09-12 00:14:32.782 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c1d68b3a80 ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.812383 containerd[1586]: 2025-09-12 00:14:32.792 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.812455 containerd[1586]: 2025-09-12 00:14:32.793 [INFO][3920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vmt22-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"70a29a4d-6cd3-4232-ba35-c5a38be87f36", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41", Pod:"coredns-668d6bf9bc-vmt22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c1d68b3a80", MAC:"56:2a:8f:74:7c:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:32.812455 containerd[1586]: 2025-09-12 00:14:32.803 [INFO][3920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" Namespace="kube-system" Pod="coredns-668d6bf9bc-vmt22" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vmt22-eth0" Sep 12 00:14:32.873943 systemd-networkd[1496]: cali449a68ab8e0: Link UP Sep 12 00:14:32.875757 systemd-networkd[1496]: cali449a68ab8e0: Gained carrier Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.212 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.565 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0 coredns-668d6bf9bc- kube-system 1b92995e-5cd8-4892-84c7-b3e0721991e6 835 0 2025-09-12 00:13:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9gk4r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali449a68ab8e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.565 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.652 [INFO][3983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" 
HandleID="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Workload="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" HandleID="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Workload="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9gk4r", "timestamp":"2025-09-12 00:14:32.652860508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.653 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.777 [INFO][3983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.800 [INFO][3983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.840 [INFO][3983] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.845 [INFO][3983] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.846 [INFO][3983] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.848 [INFO][3983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.848 [INFO][3983] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.849 [INFO][3983] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12 Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.852 [INFO][3983] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.860 [INFO][3983] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.860 [INFO][3983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" host="localhost" Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.861 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:32.898623 containerd[1586]: 2025-09-12 00:14:32.861 [INFO][3983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" HandleID="k8s-pod-network.8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Workload="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.866 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b92995e-5cd8-4892-84c7-b3e0721991e6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9gk4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali449a68ab8e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.867 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.867 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali449a68ab8e0 ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.876 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.877 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b92995e-5cd8-4892-84c7-b3e0721991e6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12", Pod:"coredns-668d6bf9bc-9gk4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali449a68ab8e0", MAC:"2e:8a:64:ce:0d:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:32.899213 containerd[1586]: 2025-09-12 00:14:32.893 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" Namespace="kube-system" Pod="coredns-668d6bf9bc-9gk4r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9gk4r-eth0" Sep 12 00:14:32.941350 containerd[1586]: time="2025-09-12T00:14:32.940768264Z" level=info msg="connecting to shim fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41" address="unix:///run/containerd/s/59fce73ad572d550f22acad79466a6ed8951d14f40ab038adc7184bcbbb89537" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:32.947337 containerd[1586]: time="2025-09-12T00:14:32.947264847Z" level=info msg="connecting to shim 8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12" address="unix:///run/containerd/s/66e381c2c2dbfce26f3dc9a9d4d386fff320c1b924149c376e5f58b616497059" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:32.972505 systemd[1]: Started cri-containerd-fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41.scope - libcontainer container fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41. Sep 12 00:14:32.976379 systemd[1]: Started cri-containerd-8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12.scope - libcontainer container 8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12. 
Sep 12 00:14:32.988756 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:32.990016 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:33.021382 containerd[1586]: time="2025-09-12T00:14:33.021169710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-8ldzv,Uid:ec9e5ecf-576e-41f0-ad72-4d4444b41b88,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:14:33.048622 containerd[1586]: time="2025-09-12T00:14:33.048582831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmt22,Uid:70a29a4d-6cd3-4232-ba35-c5a38be87f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41\"" Sep 12 00:14:33.051811 containerd[1586]: time="2025-09-12T00:14:33.051761046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9gk4r,Uid:1b92995e-5cd8-4892-84c7-b3e0721991e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12\"" Sep 12 00:14:33.053616 kubelet[2748]: E0912 00:14:33.053586 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:33.054934 kubelet[2748]: E0912 00:14:33.054376 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:33.057833 containerd[1586]: time="2025-09-12T00:14:33.057386564Z" level=info msg="CreateContainer within sandbox \"8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:14:33.057833 containerd[1586]: 
time="2025-09-12T00:14:33.057529804Z" level=info msg="CreateContainer within sandbox \"fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:14:33.081289 containerd[1586]: time="2025-09-12T00:14:33.081241447Z" level=info msg="Container ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:33.081865 containerd[1586]: time="2025-09-12T00:14:33.081818781Z" level=info msg="Container f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:33.089550 containerd[1586]: time="2025-09-12T00:14:33.089504765Z" level=info msg="CreateContainer within sandbox \"fb94f8f88d4ac8df3c793a32ce693537f2d2f6a185bf9ea0039f62be9f617b41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177\"" Sep 12 00:14:33.090592 containerd[1586]: time="2025-09-12T00:14:33.090556039Z" level=info msg="StartContainer for \"f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177\"" Sep 12 00:14:33.094335 containerd[1586]: time="2025-09-12T00:14:33.094306658Z" level=info msg="connecting to shim f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177" address="unix:///run/containerd/s/59fce73ad572d550f22acad79466a6ed8951d14f40ab038adc7184bcbbb89537" protocol=ttrpc version=3 Sep 12 00:14:33.096753 containerd[1586]: time="2025-09-12T00:14:33.096717313Z" level=info msg="CreateContainer within sandbox \"8f810e5a0b9e1e97ec624b7f82c054a7c249090da06275ae21551b08c2f4bd12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699\"" Sep 12 00:14:33.097307 containerd[1586]: time="2025-09-12T00:14:33.097287002Z" level=info msg="StartContainer for \"ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699\"" Sep 12 
00:14:33.099215 containerd[1586]: time="2025-09-12T00:14:33.099181517Z" level=info msg="connecting to shim ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699" address="unix:///run/containerd/s/66e381c2c2dbfce26f3dc9a9d4d386fff320c1b924149c376e5f58b616497059" protocol=ttrpc version=3 Sep 12 00:14:33.124643 systemd[1]: Started cri-containerd-f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177.scope - libcontainer container f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177. Sep 12 00:14:33.128388 systemd[1]: Started cri-containerd-ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699.scope - libcontainer container ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699. Sep 12 00:14:33.187216 systemd-networkd[1496]: cali7385b4debd6: Link UP Sep 12 00:14:33.189721 systemd-networkd[1496]: cali7385b4debd6: Gained carrier Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.068 [INFO][4115] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.077 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0 calico-apiserver-67bdfdb66c- calico-apiserver ec9e5ecf-576e-41f0-ad72-4d4444b41b88 836 0 2025-09-12 00:13:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67bdfdb66c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67bdfdb66c-8ldzv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7385b4debd6 [] [] }} ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.077 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.128 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" HandleID="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.128 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" HandleID="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67bdfdb66c-8ldzv", "timestamp":"2025-09-12 00:14:33.128116585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.128 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.128 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.128 [INFO][4132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.142 [INFO][4132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.148 [INFO][4132] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.154 [INFO][4132] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.156 [INFO][4132] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.159 [INFO][4132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.159 [INFO][4132] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.161 [INFO][4132] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593 Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.165 [INFO][4132] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.174 [INFO][4132] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.174 [INFO][4132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" host="localhost" Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.174 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:33.209896 containerd[1586]: 2025-09-12 00:14:33.174 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" HandleID="k8s-pod-network.630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.179 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0", GenerateName:"calico-apiserver-67bdfdb66c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec9e5ecf-576e-41f0-ad72-4d4444b41b88", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67bdfdb66c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67bdfdb66c-8ldzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7385b4debd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.180 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.180 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7385b4debd6 ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.188 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.189 [INFO][4115] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0", GenerateName:"calico-apiserver-67bdfdb66c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec9e5ecf-576e-41f0-ad72-4d4444b41b88", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67bdfdb66c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593", Pod:"calico-apiserver-67bdfdb66c-8ldzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7385b4debd6", MAC:"02:70:9c:1d:be:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:33.210696 containerd[1586]: 2025-09-12 00:14:33.206 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-8ldzv" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--8ldzv-eth0" Sep 12 00:14:33.308832 containerd[1586]: time="2025-09-12T00:14:33.308763454Z" level=info msg="StartContainer for \"ca6bf2149f99b7909cb1a10d5c4f42abd90ccb79a7511828ed850a01598a9699\" returns successfully" Sep 12 00:14:33.309210 containerd[1586]: time="2025-09-12T00:14:33.309144429Z" level=info msg="StartContainer for \"f503e51a239212744973a7e9048099afcbcb697af7533aaf51a2d0540c14d177\" returns successfully" Sep 12 00:14:33.315012 kubelet[2748]: E0912 00:14:33.314895 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:33.323957 systemd[1]: Removed slice kubepods-besteffort-pod423d4570_8291_4176_983b_647c391185ae.slice - libcontainer container kubepods-besteffort-pod423d4570_8291_4176_983b_647c391185ae.slice. 
Sep 12 00:14:33.350070 kubelet[2748]: I0912 00:14:33.349990 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vmt22" podStartSLOduration=44.34996289 podStartE2EDuration="44.34996289s" podCreationTimestamp="2025-09-12 00:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:14:33.335330978 +0000 UTC m=+49.425305769" watchObservedRunningTime="2025-09-12 00:14:33.34996289 +0000 UTC m=+49.439937681" Sep 12 00:14:33.356188 containerd[1586]: time="2025-09-12T00:14:33.356103154Z" level=info msg="connecting to shim 630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593" address="unix:///run/containerd/s/a7d13dd3da196b58e3dce2f04932674b82f9d7a2f7f9c188273b96f8f8dead85" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:33.396014 systemd[1]: Started cri-containerd-630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593.scope - libcontainer container 630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593. Sep 12 00:14:33.426958 systemd[1]: Created slice kubepods-besteffort-podebc2ad0d_2504_48ea_9634_fba0f00388c7.slice - libcontainer container kubepods-besteffort-podebc2ad0d_2504_48ea_9634_fba0f00388c7.slice. 
Sep 12 00:14:33.462816 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:33.465240 kubelet[2748]: I0912 00:14:33.465199 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc2ad0d-2504-48ea-9634-fba0f00388c7-whisker-ca-bundle\") pod \"whisker-6dcbf89b99-zbqh4\" (UID: \"ebc2ad0d-2504-48ea-9634-fba0f00388c7\") " pod="calico-system/whisker-6dcbf89b99-zbqh4" Sep 12 00:14:33.465579 kubelet[2748]: I0912 00:14:33.465554 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebc2ad0d-2504-48ea-9634-fba0f00388c7-whisker-backend-key-pair\") pod \"whisker-6dcbf89b99-zbqh4\" (UID: \"ebc2ad0d-2504-48ea-9634-fba0f00388c7\") " pod="calico-system/whisker-6dcbf89b99-zbqh4" Sep 12 00:14:33.466079 kubelet[2748]: I0912 00:14:33.465783 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtjh\" (UniqueName: \"kubernetes.io/projected/ebc2ad0d-2504-48ea-9634-fba0f00388c7-kube-api-access-kxtjh\") pod \"whisker-6dcbf89b99-zbqh4\" (UID: \"ebc2ad0d-2504-48ea-9634-fba0f00388c7\") " pod="calico-system/whisker-6dcbf89b99-zbqh4" Sep 12 00:14:33.502091 containerd[1586]: time="2025-09-12T00:14:33.502025571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\" id:\"655af4a54735e6e6d5d70e0eb1a047bb023ccf6733c7a7fefedeee432c725dc2\" pid:4214 exit_status:1 exited_at:{seconds:1757636073 nanos:501549346}" Sep 12 00:14:33.622711 containerd[1586]: time="2025-09-12T00:14:33.622637433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-8ldzv,Uid:ec9e5ecf-576e-41f0-ad72-4d4444b41b88,Namespace:calico-apiserver,Attempt:0,} 
returns sandbox id \"630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593\"" Sep 12 00:14:33.625842 containerd[1586]: time="2025-09-12T00:14:33.625760163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 00:14:33.736233 containerd[1586]: time="2025-09-12T00:14:33.736101519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dcbf89b99-zbqh4,Uid:ebc2ad0d-2504-48ea-9634-fba0f00388c7,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:33.854979 systemd-networkd[1496]: calidf67039996b: Link UP Sep 12 00:14:33.855306 systemd-networkd[1496]: calidf67039996b: Gained carrier Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.768 [INFO][4295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.780 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0 whisker-6dcbf89b99- calico-system ebc2ad0d-2504-48ea-9634-fba0f00388c7 991 0 2025-09-12 00:14:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dcbf89b99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6dcbf89b99-zbqh4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf67039996b [] [] }} ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.780 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 
00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.809 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" HandleID="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Workload="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.809 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" HandleID="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Workload="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6dcbf89b99-zbqh4", "timestamp":"2025-09-12 00:14:33.809319569 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.809 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.809 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.809 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.817 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.822 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.829 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.832 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.835 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.835 [INFO][4309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.837 [INFO][4309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.842 [INFO][4309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.847 [INFO][4309] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.847 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" host="localhost" Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.847 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:33.869938 containerd[1586]: 2025-09-12 00:14:33.847 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" HandleID="k8s-pod-network.e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Workload="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.852 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0", GenerateName:"whisker-6dcbf89b99-", Namespace:"calico-system", SelfLink:"", UID:"ebc2ad0d-2504-48ea-9634-fba0f00388c7", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dcbf89b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6dcbf89b99-zbqh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf67039996b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.852 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.852 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf67039996b ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.856 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.856 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" 
WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0", GenerateName:"whisker-6dcbf89b99-", Namespace:"calico-system", SelfLink:"", UID:"ebc2ad0d-2504-48ea-9634-fba0f00388c7", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dcbf89b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c", Pod:"whisker-6dcbf89b99-zbqh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf67039996b", MAC:"36:37:2b:eb:b5:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:33.870918 containerd[1586]: 2025-09-12 00:14:33.865 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" Namespace="calico-system" Pod="whisker-6dcbf89b99-zbqh4" WorkloadEndpoint="localhost-k8s-whisker--6dcbf89b99--zbqh4-eth0" Sep 12 00:14:33.895517 containerd[1586]: time="2025-09-12T00:14:33.895438101Z" level=info msg="connecting to shim 
e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c" address="unix:///run/containerd/s/dbc6eb2768d5f113c5b131e85a47e48a2e6f5ff945f2630a647429c94ae6ddb1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:33.936737 systemd[1]: Started cri-containerd-e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c.scope - libcontainer container e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c. Sep 12 00:14:33.954448 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:34.024478 kubelet[2748]: I0912 00:14:34.024291 2748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423d4570-8291-4176-983b-647c391185ae" path="/var/lib/kubelet/pods/423d4570-8291-4176-983b-647c391185ae/volumes" Sep 12 00:14:34.139596 systemd-networkd[1496]: cali4c1d68b3a80: Gained IPv6LL Sep 12 00:14:34.313258 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:50920.service - OpenSSH per-connection server daemon (10.0.0.1:50920). Sep 12 00:14:34.319336 kubelet[2748]: E0912 00:14:34.319040 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:34.320575 kubelet[2748]: E0912 00:14:34.320540 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:34.520212 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 50920 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:34.522057 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:34.526905 systemd-logind[1568]: New session 9 of user core. Sep 12 00:14:34.534495 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 12 00:14:34.596044 containerd[1586]: time="2025-09-12T00:14:34.595922621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dcbf89b99-zbqh4,Uid:ebc2ad0d-2504-48ea-9634-fba0f00388c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c\"" Sep 12 00:14:34.605903 kubelet[2748]: I0912 00:14:34.605745 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9gk4r" podStartSLOduration=45.605723897 podStartE2EDuration="45.605723897s" podCreationTimestamp="2025-09-12 00:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:14:34.388702183 +0000 UTC m=+50.478676974" watchObservedRunningTime="2025-09-12 00:14:34.605723897 +0000 UTC m=+50.695698688" Sep 12 00:14:34.907542 systemd-networkd[1496]: cali449a68ab8e0: Gained IPv6LL Sep 12 00:14:34.972541 systemd-networkd[1496]: cali7385b4debd6: Gained IPv6LL Sep 12 00:14:35.022990 containerd[1586]: time="2025-09-12T00:14:35.022203781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-vd47f,Uid:668908df-9f8b-450a-bb7d-f0abb01d8bf3,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:14:35.023449 containerd[1586]: time="2025-09-12T00:14:35.023224667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ztw7n,Uid:9d49c369-50e2-4ca6-bb1c-be30deace4e7,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:35.035586 sshd[4507]: Connection closed by 10.0.0.1 port 50920 Sep 12 00:14:35.036129 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:35.040119 systemd-networkd[1496]: calidf67039996b: Gained IPv6LL Sep 12 00:14:35.054988 systemd-networkd[1496]: vxlan.calico: Link UP Sep 12 00:14:35.055000 systemd-networkd[1496]: vxlan.calico: Gained carrier Sep 12 00:14:35.056603 systemd[1]: 
sshd@8-10.0.0.54:22-10.0.0.1:50920.service: Deactivated successfully. Sep 12 00:14:35.062540 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 00:14:35.067833 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Sep 12 00:14:35.071018 systemd-logind[1568]: Removed session 9. Sep 12 00:14:35.211658 systemd-networkd[1496]: cali8a0bd7745ce: Link UP Sep 12 00:14:35.213134 systemd-networkd[1496]: cali8a0bd7745ce: Gained carrier Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.104 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0 calico-apiserver-67bdfdb66c- calico-apiserver 668908df-9f8b-450a-bb7d-f0abb01d8bf3 830 0 2025-09-12 00:13:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67bdfdb66c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67bdfdb66c-vd47f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a0bd7745ce [] [] }} ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.105 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.164 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" HandleID="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.165 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" HandleID="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044e5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67bdfdb66c-vd47f", "timestamp":"2025-09-12 00:14:35.164737057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.165 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.165 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.165 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.172 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.177 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.182 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.184 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.186 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.186 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.188 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.192 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.199 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.199 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" host="localhost" Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.199 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:35.228186 containerd[1586]: 2025-09-12 00:14:35.199 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" HandleID="k8s-pod-network.e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Workload="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.203 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0", GenerateName:"calico-apiserver-67bdfdb66c-", Namespace:"calico-apiserver", SelfLink:"", UID:"668908df-9f8b-450a-bb7d-f0abb01d8bf3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67bdfdb66c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67bdfdb66c-vd47f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a0bd7745ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.203 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.203 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a0bd7745ce ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.213 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.214 [INFO][4542] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0", GenerateName:"calico-apiserver-67bdfdb66c-", Namespace:"calico-apiserver", SelfLink:"", UID:"668908df-9f8b-450a-bb7d-f0abb01d8bf3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 13, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67bdfdb66c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b", Pod:"calico-apiserver-67bdfdb66c-vd47f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a0bd7745ce", MAC:"ae:1f:55:fd:f5:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:35.228894 containerd[1586]: 2025-09-12 00:14:35.223 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" Namespace="calico-apiserver" Pod="calico-apiserver-67bdfdb66c-vd47f" WorkloadEndpoint="localhost-k8s-calico--apiserver--67bdfdb66c--vd47f-eth0" Sep 12 00:14:35.274629 containerd[1586]: time="2025-09-12T00:14:35.274570864Z" level=info msg="connecting to shim e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b" address="unix:///run/containerd/s/46885437be1c86a2a97c707d3bd459e1dafdcc12643bee8fd1aed273071b40a2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:35.311672 systemd[1]: Started cri-containerd-e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b.scope - libcontainer container e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b. Sep 12 00:14:35.323464 kubelet[2748]: E0912 00:14:35.323433 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:35.324159 kubelet[2748]: E0912 00:14:35.324128 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:35.331505 systemd-networkd[1496]: cali4e3e3c7638a: Link UP Sep 12 00:14:35.332093 systemd-networkd[1496]: cali4e3e3c7638a: Gained carrier Sep 12 00:14:35.342394 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.110 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--ztw7n-eth0 goldmane-54d579b49d- calico-system 9d49c369-50e2-4ca6-bb1c-be30deace4e7 834 0 2025-09-12 00:14:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-ztw7n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4e3e3c7638a [] [] }} ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.110 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.174 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" HandleID="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Workload="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.175 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" HandleID="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Workload="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-ztw7n", "timestamp":"2025-09-12 00:14:35.174862219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:35.351326 containerd[1586]: 
2025-09-12 00:14:35.175 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.199 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.200 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.272 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.287 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.292 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.294 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.297 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.298 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.299 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270 Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.304 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" 
host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.315 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.315 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" host="localhost" Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.315 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:35.351326 containerd[1586]: 2025-09-12 00:14:35.315 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" HandleID="k8s-pod-network.c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Workload="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.326 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ztw7n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d49c369-50e2-4ca6-bb1c-be30deace4e7", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-ztw7n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e3e3c7638a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.326 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.326 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e3e3c7638a ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.331 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.333 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ztw7n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d49c369-50e2-4ca6-bb1c-be30deace4e7", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270", Pod:"goldmane-54d579b49d-ztw7n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e3e3c7638a", MAC:"8a:38:f2:a4:26:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:35.352006 containerd[1586]: 2025-09-12 00:14:35.346 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" Namespace="calico-system" Pod="goldmane-54d579b49d-ztw7n" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ztw7n-eth0" Sep 12 00:14:35.391971 containerd[1586]: time="2025-09-12T00:14:35.391904377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67bdfdb66c-vd47f,Uid:668908df-9f8b-450a-bb7d-f0abb01d8bf3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b\"" Sep 12 00:14:35.409837 containerd[1586]: time="2025-09-12T00:14:35.409754116Z" level=info msg="connecting to shim c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270" address="unix:///run/containerd/s/bf54ec4cd3bf949fc3b01eb960a08c5b5a3ca92fbddd7ace82b04248337ba78d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:35.445657 systemd[1]: Started cri-containerd-c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270.scope - libcontainer container c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270. Sep 12 00:14:35.462017 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:35.509937 containerd[1586]: time="2025-09-12T00:14:35.509793853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ztw7n,Uid:9d49c369-50e2-4ca6-bb1c-be30deace4e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270\"" Sep 12 00:14:36.021810 containerd[1586]: time="2025-09-12T00:14:36.021759980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58555d9d7-fshxl,Uid:e16bddf2-c561-47c4-bd5f-16310c0129e0,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:36.022808 containerd[1586]: time="2025-09-12T00:14:36.022780786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmdzp,Uid:4a8d8933-0196-4a27-a293-4f89ec69d3dc,Namespace:calico-system,Attempt:0,}" Sep 12 00:14:36.170880 systemd-networkd[1496]: cali38cb523d418: Link UP Sep 12 
00:14:36.172053 systemd-networkd[1496]: cali38cb523d418: Gained carrier Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.087 [INFO][4769] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0 calico-kube-controllers-58555d9d7- calico-system e16bddf2-c561-47c4-bd5f-16310c0129e0 823 0 2025-09-12 00:14:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58555d9d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58555d9d7-fshxl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali38cb523d418 [] [] }} ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.087 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.125 [INFO][4803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" HandleID="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Workload="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.125 [INFO][4803] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" HandleID="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Workload="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003242b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58555d9d7-fshxl", "timestamp":"2025-09-12 00:14:36.125101176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.125 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.125 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.125 [INFO][4803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.134 [INFO][4803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.139 [INFO][4803] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.144 [INFO][4803] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.145 [INFO][4803] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.147 [INFO][4803] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.147 [INFO][4803] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.149 [INFO][4803] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.152 [INFO][4803] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.159 [INFO][4803] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.159 [INFO][4803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" host="localhost" Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.160 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:14:36.189040 containerd[1586]: 2025-09-12 00:14:36.160 [INFO][4803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" HandleID="k8s-pod-network.1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Workload="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.163 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0", GenerateName:"calico-kube-controllers-58555d9d7-", Namespace:"calico-system", SelfLink:"", UID:"e16bddf2-c561-47c4-bd5f-16310c0129e0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58555d9d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58555d9d7-fshxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38cb523d418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.164 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.164 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38cb523d418 ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.172 [INFO][4769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.173 [INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0", GenerateName:"calico-kube-controllers-58555d9d7-", Namespace:"calico-system", SelfLink:"", UID:"e16bddf2-c561-47c4-bd5f-16310c0129e0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58555d9d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f", Pod:"calico-kube-controllers-58555d9d7-fshxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38cb523d418", MAC:"7a:06:5f:4f:01:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:36.189900 containerd[1586]: 2025-09-12 00:14:36.182 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" Namespace="calico-system" Pod="calico-kube-controllers-58555d9d7-fshxl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58555d9d7--fshxl-eth0" Sep 12 00:14:36.316944 systemd-networkd[1496]: caliaf6c69bb014: Link UP Sep 12 00:14:36.320623 containerd[1586]: 
time="2025-09-12T00:14:36.320568071Z" level=info msg="connecting to shim 1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f" address="unix:///run/containerd/s/d189c33107af34fe8a0993563222913329c9a7bebd7332688fb88d8db377348f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:36.321847 systemd-networkd[1496]: caliaf6c69bb014: Gained carrier Sep 12 00:14:36.333726 kubelet[2748]: E0912 00:14:36.333659 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.084 [INFO][4777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kmdzp-eth0 csi-node-driver- calico-system 4a8d8933-0196-4a27-a293-4f89ec69d3dc 706 0 2025-09-12 00:14:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kmdzp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf6c69bb014 [] [] }} ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.084 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.126 [INFO][4801] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" HandleID="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Workload="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.126 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" HandleID="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Workload="localhost-k8s-csi--node--driver--kmdzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kmdzp", "timestamp":"2025-09-12 00:14:36.126117173 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.126 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.160 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.160 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.235 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.241 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.245 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.247 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.249 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.249 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.251 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512 Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.260 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.274 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.276 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" host="localhost" Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.276 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:14:36.355389 containerd[1586]: 2025-09-12 00:14:36.276 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" HandleID="k8s-pod-network.f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Workload="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.292 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kmdzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a8d8933-0196-4a27-a293-4f89ec69d3dc", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kmdzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf6c69bb014", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.292 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.292 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf6c69bb014 ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.320 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.322 [INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" 
Namespace="calico-system" Pod="csi-node-driver-kmdzp" WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kmdzp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a8d8933-0196-4a27-a293-4f89ec69d3dc", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 14, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512", Pod:"csi-node-driver-kmdzp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf6c69bb014", MAC:"ba:8e:8a:cc:f4:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:14:36.356065 containerd[1586]: 2025-09-12 00:14:36.347 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" Namespace="calico-system" Pod="csi-node-driver-kmdzp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kmdzp-eth0" Sep 12 00:14:36.383535 systemd[1]: Started cri-containerd-1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f.scope - libcontainer container 1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f. Sep 12 00:14:36.393377 containerd[1586]: time="2025-09-12T00:14:36.390786247Z" level=info msg="connecting to shim f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512" address="unix:///run/containerd/s/efef942fe3dca409dafd35f0e06120a0a32a3d84aafc04859f8abd4bce6a7561" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:14:36.415044 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:36.419582 systemd[1]: Started cri-containerd-f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512.scope - libcontainer container f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512. Sep 12 00:14:36.448436 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:14:36.477605 containerd[1586]: time="2025-09-12T00:14:36.477558820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58555d9d7-fshxl,Uid:e16bddf2-c561-47c4-bd5f-16310c0129e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f\"" Sep 12 00:14:36.478679 containerd[1586]: time="2025-09-12T00:14:36.478487322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmdzp,Uid:4a8d8933-0196-4a27-a293-4f89ec69d3dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512\"" Sep 12 00:14:36.689657 containerd[1586]: time="2025-09-12T00:14:36.689498699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 00:14:36.690533 containerd[1586]: time="2025-09-12T00:14:36.690402625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 00:14:36.691732 containerd[1586]: time="2025-09-12T00:14:36.691672619Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:36.694088 containerd[1586]: time="2025-09-12T00:14:36.694028730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:36.694762 containerd[1586]: time="2025-09-12T00:14:36.694729115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.068880957s" Sep 12 00:14:36.694837 containerd[1586]: time="2025-09-12T00:14:36.694766645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 00:14:36.697826 containerd[1586]: time="2025-09-12T00:14:36.697651730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 00:14:36.702724 containerd[1586]: time="2025-09-12T00:14:36.702663194Z" level=info msg="CreateContainer within sandbox \"630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:14:36.709155 containerd[1586]: time="2025-09-12T00:14:36.709119129Z" level=info msg="Container 
fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:36.716954 containerd[1586]: time="2025-09-12T00:14:36.716914088Z" level=info msg="CreateContainer within sandbox \"630a54a6c5430f54b3f158a092174f8c002160b9a58e4b353284d6eb4729a593\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f\"" Sep 12 00:14:36.717483 containerd[1586]: time="2025-09-12T00:14:36.717437570Z" level=info msg="StartContainer for \"fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f\"" Sep 12 00:14:36.718691 containerd[1586]: time="2025-09-12T00:14:36.718646179Z" level=info msg="connecting to shim fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f" address="unix:///run/containerd/s/a7d13dd3da196b58e3dce2f04932674b82f9d7a2f7f9c188273b96f8f8dead85" protocol=ttrpc version=3 Sep 12 00:14:36.739530 systemd[1]: Started cri-containerd-fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f.scope - libcontainer container fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f. 
Sep 12 00:14:36.792983 containerd[1586]: time="2025-09-12T00:14:36.792936076Z" level=info msg="StartContainer for \"fc345d8fd3c31c4df0cc9070b5012e927255782ce6bc1eedba7b9a6cd88b600f\" returns successfully" Sep 12 00:14:36.827576 systemd-networkd[1496]: cali4e3e3c7638a: Gained IPv6LL Sep 12 00:14:37.019532 systemd-networkd[1496]: cali8a0bd7745ce: Gained IPv6LL Sep 12 00:14:37.020447 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Sep 12 00:14:37.403705 systemd-networkd[1496]: caliaf6c69bb014: Gained IPv6LL Sep 12 00:14:37.468539 systemd-networkd[1496]: cali38cb523d418: Gained IPv6LL Sep 12 00:14:38.096873 containerd[1586]: time="2025-09-12T00:14:38.096794194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:38.097697 containerd[1586]: time="2025-09-12T00:14:38.097618541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 00:14:38.099224 containerd[1586]: time="2025-09-12T00:14:38.099160003Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:38.102503 containerd[1586]: time="2025-09-12T00:14:38.102437023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:38.103283 containerd[1586]: time="2025-09-12T00:14:38.103204834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.405516496s" Sep 
12 00:14:38.103283 containerd[1586]: time="2025-09-12T00:14:38.103273142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 00:14:38.105448 containerd[1586]: time="2025-09-12T00:14:38.105399613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 00:14:38.107105 containerd[1586]: time="2025-09-12T00:14:38.107073754Z" level=info msg="CreateContainer within sandbox \"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 00:14:38.116892 containerd[1586]: time="2025-09-12T00:14:38.116751064Z" level=info msg="Container b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:38.126152 containerd[1586]: time="2025-09-12T00:14:38.126066154Z" level=info msg="CreateContainer within sandbox \"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463\"" Sep 12 00:14:38.127014 containerd[1586]: time="2025-09-12T00:14:38.126983787Z" level=info msg="StartContainer for \"b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463\"" Sep 12 00:14:38.128307 containerd[1586]: time="2025-09-12T00:14:38.128277013Z" level=info msg="connecting to shim b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463" address="unix:///run/containerd/s/dbc6eb2768d5f113c5b131e85a47e48a2e6f5ff945f2630a647429c94ae6ddb1" protocol=ttrpc version=3 Sep 12 00:14:38.165654 systemd[1]: Started cri-containerd-b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463.scope - libcontainer container b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463. 
Sep 12 00:14:38.222301 containerd[1586]: time="2025-09-12T00:14:38.222252212Z" level=info msg="StartContainer for \"b4dd5f56f3c1b55e5833413a786bf3a147fba33850811438e2ad473af6952463\" returns successfully" Sep 12 00:14:38.343124 kubelet[2748]: I0912 00:14:38.343086 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:14:38.524538 containerd[1586]: time="2025-09-12T00:14:38.524453807Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:38.526312 containerd[1586]: time="2025-09-12T00:14:38.526257812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 00:14:38.528947 containerd[1586]: time="2025-09-12T00:14:38.528790686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 423.348783ms" Sep 12 00:14:38.528947 containerd[1586]: time="2025-09-12T00:14:38.528856710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 00:14:38.531433 containerd[1586]: time="2025-09-12T00:14:38.530173200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 00:14:38.532716 containerd[1586]: time="2025-09-12T00:14:38.532669975Z" level=info msg="CreateContainer within sandbox \"e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:14:38.542280 containerd[1586]: time="2025-09-12T00:14:38.542228552Z" level=info msg="Container 
35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:38.553264 containerd[1586]: time="2025-09-12T00:14:38.553212093Z" level=info msg="CreateContainer within sandbox \"e42bfb9548948f5194a7a6a2edd7f0b44d008fb2c148d0978e996b299171ac1b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06\"" Sep 12 00:14:38.553865 containerd[1586]: time="2025-09-12T00:14:38.553829171Z" level=info msg="StartContainer for \"35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06\"" Sep 12 00:14:38.555057 containerd[1586]: time="2025-09-12T00:14:38.554996822Z" level=info msg="connecting to shim 35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06" address="unix:///run/containerd/s/46885437be1c86a2a97c707d3bd459e1dafdcc12643bee8fd1aed273071b40a2" protocol=ttrpc version=3 Sep 12 00:14:38.581611 systemd[1]: Started cri-containerd-35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06.scope - libcontainer container 35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06. 
Sep 12 00:14:38.778042 containerd[1586]: time="2025-09-12T00:14:38.777853744Z" level=info msg="StartContainer for \"35ba9ecc9676e1df82e5fadc87582b416abf9c3e09f7fabf5c3ca174a80b3c06\" returns successfully" Sep 12 00:14:39.360035 kubelet[2748]: I0912 00:14:39.359953 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67bdfdb66c-8ldzv" podStartSLOduration=37.286910927 podStartE2EDuration="40.359929608s" podCreationTimestamp="2025-09-12 00:13:59 +0000 UTC" firstStartedPulling="2025-09-12 00:14:33.624325962 +0000 UTC m=+49.714300753" lastFinishedPulling="2025-09-12 00:14:36.697344643 +0000 UTC m=+52.787319434" observedRunningTime="2025-09-12 00:14:37.420911988 +0000 UTC m=+53.510886779" watchObservedRunningTime="2025-09-12 00:14:39.359929608 +0000 UTC m=+55.449904399" Sep 12 00:14:40.055938 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:42248.service - OpenSSH per-connection server daemon (10.0.0.1:42248). Sep 12 00:14:40.153541 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 42248 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:40.156194 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:40.162432 systemd-logind[1568]: New session 10 of user core. Sep 12 00:14:40.171648 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 00:14:40.349761 kubelet[2748]: I0912 00:14:40.349644 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:14:40.431979 sshd[5048]: Connection closed by 10.0.0.1 port 42248 Sep 12 00:14:40.432576 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:40.435928 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:42248.service: Deactivated successfully. Sep 12 00:14:40.438334 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 00:14:40.440192 systemd-logind[1568]: Session 10 logged out. 
Waiting for processes to exit. Sep 12 00:14:40.441885 systemd-logind[1568]: Removed session 10. Sep 12 00:14:40.878859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054590611.mount: Deactivated successfully. Sep 12 00:14:41.575846 containerd[1586]: time="2025-09-12T00:14:41.575776537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 00:14:41.581218 containerd[1586]: time="2025-09-12T00:14:41.581163345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.050947435s" Sep 12 00:14:41.581218 containerd[1586]: time="2025-09-12T00:14:41.581220913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 00:14:41.587331 containerd[1586]: time="2025-09-12T00:14:41.587239506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:41.588247 containerd[1586]: time="2025-09-12T00:14:41.588217472Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:41.588811 containerd[1586]: time="2025-09-12T00:14:41.588784135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:41.589191 containerd[1586]: time="2025-09-12T00:14:41.589152366Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 00:14:41.592192 containerd[1586]: time="2025-09-12T00:14:41.592146224Z" level=info msg="CreateContainer within sandbox \"c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 00:14:41.605653 containerd[1586]: time="2025-09-12T00:14:41.603870794Z" level=info msg="Container ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:41.617262 containerd[1586]: time="2025-09-12T00:14:41.617203251Z" level=info msg="CreateContainer within sandbox \"c4302f1152045d72cc7dc6de7ad66ba1b42efbed996cec8b675b9efdf3392270\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\"" Sep 12 00:14:41.618022 containerd[1586]: time="2025-09-12T00:14:41.617970951Z" level=info msg="StartContainer for \"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\"" Sep 12 00:14:41.619309 containerd[1586]: time="2025-09-12T00:14:41.619273956Z" level=info msg="connecting to shim ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6" address="unix:///run/containerd/s/bf54ec4cd3bf949fc3b01eb960a08c5b5a3ca92fbddd7ace82b04248337ba78d" protocol=ttrpc version=3 Sep 12 00:14:41.656551 systemd[1]: Started cri-containerd-ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6.scope - libcontainer container ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6. 
Sep 12 00:14:41.714535 containerd[1586]: time="2025-09-12T00:14:41.714475464Z" level=info msg="StartContainer for \"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" returns successfully" Sep 12 00:14:42.443998 containerd[1586]: time="2025-09-12T00:14:42.443942999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" id:\"8fcbe43e2cbdf48b77e89bac0cf5832ce24414b04d9717032a18c0ae8c5f3506\" pid:5126 exit_status:1 exited_at:{seconds:1757636082 nanos:443509488}" Sep 12 00:14:42.519655 kubelet[2748]: I0912 00:14:42.519557 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-ztw7n" podStartSLOduration=33.442638363 podStartE2EDuration="39.51953727s" podCreationTimestamp="2025-09-12 00:14:03 +0000 UTC" firstStartedPulling="2025-09-12 00:14:35.51208351 +0000 UTC m=+51.602058301" lastFinishedPulling="2025-09-12 00:14:41.588982416 +0000 UTC m=+57.678957208" observedRunningTime="2025-09-12 00:14:42.519391177 +0000 UTC m=+58.609365978" watchObservedRunningTime="2025-09-12 00:14:42.51953727 +0000 UTC m=+58.609512061" Sep 12 00:14:42.520188 kubelet[2748]: I0912 00:14:42.519747 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67bdfdb66c-vd47f" podStartSLOduration=40.384240849 podStartE2EDuration="43.519741867s" podCreationTimestamp="2025-09-12 00:13:59 +0000 UTC" firstStartedPulling="2025-09-12 00:14:35.394448381 +0000 UTC m=+51.484423172" lastFinishedPulling="2025-09-12 00:14:38.529949399 +0000 UTC m=+54.619924190" observedRunningTime="2025-09-12 00:14:39.359509298 +0000 UTC m=+55.449484099" watchObservedRunningTime="2025-09-12 00:14:42.519741867 +0000 UTC m=+58.609716658" Sep 12 00:14:43.504834 containerd[1586]: time="2025-09-12T00:14:43.504697068Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" id:\"9ebac7724ab24854495e1f2de0dd669f5bf83c3598c8d2a0ee4031ef493b5913\" pid:5151 exit_status:1 exited_at:{seconds:1757636083 nanos:504075424}" Sep 12 00:14:44.481588 containerd[1586]: time="2025-09-12T00:14:44.481538557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" id:\"a15e22a4e451c033a3b822ffd56f5d081fbd622d639927c5bd0d42c9b7569ecd\" pid:5182 exit_status:1 exited_at:{seconds:1757636084 nanos:481181266}" Sep 12 00:14:45.068686 containerd[1586]: time="2025-09-12T00:14:45.068626510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:45.069680 containerd[1586]: time="2025-09-12T00:14:45.069640352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 00:14:45.071305 containerd[1586]: time="2025-09-12T00:14:45.071267189Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:45.074025 containerd[1586]: time="2025-09-12T00:14:45.073969097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:45.074650 containerd[1586]: time="2025-09-12T00:14:45.074618924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.485422615s" Sep 12 00:14:45.074699 containerd[1586]: time="2025-09-12T00:14:45.074656257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 00:14:45.078226 containerd[1586]: time="2025-09-12T00:14:45.078145638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 00:14:45.090578 containerd[1586]: time="2025-09-12T00:14:45.090530518Z" level=info msg="CreateContainer within sandbox \"1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 00:14:45.100244 containerd[1586]: time="2025-09-12T00:14:45.100198723Z" level=info msg="Container 886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:45.109602 containerd[1586]: time="2025-09-12T00:14:45.109549704Z" level=info msg="CreateContainer within sandbox \"1152ed57037049efae3e3f152064874809b835e4bfd85283cbf1cc0b5be3691f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\"" Sep 12 00:14:45.110154 containerd[1586]: time="2025-09-12T00:14:45.110126821Z" level=info msg="StartContainer for \"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\"" Sep 12 00:14:45.112015 containerd[1586]: time="2025-09-12T00:14:45.111897457Z" level=info msg="connecting to shim 886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b" address="unix:///run/containerd/s/d189c33107af34fe8a0993563222913329c9a7bebd7332688fb88d8db377348f" protocol=ttrpc version=3 Sep 12 00:14:45.136509 systemd[1]: Started 
cri-containerd-886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b.scope - libcontainer container 886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b. Sep 12 00:14:45.193205 containerd[1586]: time="2025-09-12T00:14:45.193161526Z" level=info msg="StartContainer for \"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\" returns successfully" Sep 12 00:14:45.407986 kubelet[2748]: I0912 00:14:45.407831 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58555d9d7-fshxl" podStartSLOduration=33.809118 podStartE2EDuration="42.407811744s" podCreationTimestamp="2025-09-12 00:14:03 +0000 UTC" firstStartedPulling="2025-09-12 00:14:36.479277756 +0000 UTC m=+52.569252547" lastFinishedPulling="2025-09-12 00:14:45.0779715 +0000 UTC m=+61.167946291" observedRunningTime="2025-09-12 00:14:45.40672392 +0000 UTC m=+61.496698711" watchObservedRunningTime="2025-09-12 00:14:45.407811744 +0000 UTC m=+61.497786535" Sep 12 00:14:45.444567 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:42252.service - OpenSSH per-connection server daemon (10.0.0.1:42252). Sep 12 00:14:45.512392 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 42252 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:45.514323 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:45.519259 systemd-logind[1568]: New session 11 of user core. Sep 12 00:14:45.528584 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 00:14:45.743938 sshd[5245]: Connection closed by 10.0.0.1 port 42252 Sep 12 00:14:45.744293 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:45.753242 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:42252.service: Deactivated successfully. Sep 12 00:14:45.755264 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 12 00:14:45.756138 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Sep 12 00:14:45.759895 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:42260.service - OpenSSH per-connection server daemon (10.0.0.1:42260). Sep 12 00:14:45.760681 systemd-logind[1568]: Removed session 11. Sep 12 00:14:45.814395 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 42260 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:45.816184 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:45.821297 systemd-logind[1568]: New session 12 of user core. Sep 12 00:14:45.827627 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 00:14:46.040907 sshd[5270]: Connection closed by 10.0.0.1 port 42260 Sep 12 00:14:46.041766 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:46.054095 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:42260.service: Deactivated successfully. Sep 12 00:14:46.058315 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 00:14:46.060551 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Sep 12 00:14:46.067236 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:42276.service - OpenSSH per-connection server daemon (10.0.0.1:42276). Sep 12 00:14:46.070693 systemd-logind[1568]: Removed session 12. Sep 12 00:14:46.128033 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 42276 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:46.130139 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:46.135304 systemd-logind[1568]: New session 13 of user core. Sep 12 00:14:46.145624 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 12 00:14:46.332656 sshd[5284]: Connection closed by 10.0.0.1 port 42276 Sep 12 00:14:46.333073 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:46.339047 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:42276.service: Deactivated successfully. Sep 12 00:14:46.341426 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 00:14:46.342406 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Sep 12 00:14:46.344201 systemd-logind[1568]: Removed session 13. Sep 12 00:14:46.431526 containerd[1586]: time="2025-09-12T00:14:46.431307459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\" id:\"084e3c204ad13c3bf99fd6e17434b77c0cf9b6ad438aedec1332e6f096deeb38\" pid:5309 exited_at:{seconds:1757636086 nanos:430730413}" Sep 12 00:14:46.763026 containerd[1586]: time="2025-09-12T00:14:46.762938008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:46.764255 containerd[1586]: time="2025-09-12T00:14:46.764143458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 00:14:46.771964 containerd[1586]: time="2025-09-12T00:14:46.771910743Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:46.774374 containerd[1586]: time="2025-09-12T00:14:46.774315722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:46.775023 containerd[1586]: time="2025-09-12T00:14:46.774968233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.69679354s" Sep 12 00:14:46.775023 containerd[1586]: time="2025-09-12T00:14:46.775005807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 00:14:46.776197 containerd[1586]: time="2025-09-12T00:14:46.776144147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 00:14:46.777315 containerd[1586]: time="2025-09-12T00:14:46.777257328Z" level=info msg="CreateContainer within sandbox \"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 00:14:46.790007 containerd[1586]: time="2025-09-12T00:14:46.789958952Z" level=info msg="Container 12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:46.800896 containerd[1586]: time="2025-09-12T00:14:46.800840566Z" level=info msg="CreateContainer within sandbox \"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266\"" Sep 12 00:14:46.801460 containerd[1586]: time="2025-09-12T00:14:46.801412191Z" level=info msg="StartContainer for \"12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266\"" Sep 12 00:14:46.803293 containerd[1586]: time="2025-09-12T00:14:46.803264512Z" level=info msg="connecting to shim 12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266" address="unix:///run/containerd/s/efef942fe3dca409dafd35f0e06120a0a32a3d84aafc04859f8abd4bce6a7561" protocol=ttrpc version=3 Sep 12 00:14:46.828738 
systemd[1]: Started cri-containerd-12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266.scope - libcontainer container 12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266. Sep 12 00:14:46.878461 containerd[1586]: time="2025-09-12T00:14:46.878419992Z" level=info msg="StartContainer for \"12247ef0cfe33914098bd23fd41010afeff806a1b693296066eed2a5b08b5266\" returns successfully" Sep 12 00:14:49.583831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108492435.mount: Deactivated successfully. Sep 12 00:14:49.805225 containerd[1586]: time="2025-09-12T00:14:49.805143844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:49.806318 containerd[1586]: time="2025-09-12T00:14:49.806273002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 00:14:49.807944 containerd[1586]: time="2025-09-12T00:14:49.807728038Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:49.810469 containerd[1586]: time="2025-09-12T00:14:49.810419438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:49.811542 containerd[1586]: time="2025-09-12T00:14:49.811504380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.035326089s" Sep 12 
00:14:49.811542 containerd[1586]: time="2025-09-12T00:14:49.811535010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 00:14:49.812455 containerd[1586]: time="2025-09-12T00:14:49.812420137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 00:14:49.814292 containerd[1586]: time="2025-09-12T00:14:49.814227802Z" level=info msg="CreateContainer within sandbox \"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 00:14:49.824279 containerd[1586]: time="2025-09-12T00:14:49.824216236Z" level=info msg="Container 8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:49.837464 containerd[1586]: time="2025-09-12T00:14:49.837304787Z" level=info msg="CreateContainer within sandbox \"e5f5276ef14a6f8813aa873cfe88101ac38203993da8d8305de87b4aef33776c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e\"" Sep 12 00:14:49.837929 containerd[1586]: time="2025-09-12T00:14:49.837889014Z" level=info msg="StartContainer for \"8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e\"" Sep 12 00:14:49.839086 containerd[1586]: time="2025-09-12T00:14:49.839058138Z" level=info msg="connecting to shim 8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e" address="unix:///run/containerd/s/dbc6eb2768d5f113c5b131e85a47e48a2e6f5ff945f2630a647429c94ae6ddb1" protocol=ttrpc version=3 Sep 12 00:14:49.865632 systemd[1]: Started cri-containerd-8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e.scope - libcontainer container 8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e. 
Sep 12 00:14:49.924717 containerd[1586]: time="2025-09-12T00:14:49.924665812Z" level=info msg="StartContainer for \"8cd5843c157c4619b29bdd2b73a66b359cdf2f9cca5118ea11af7b11b355c31e\" returns successfully" Sep 12 00:14:50.411263 kubelet[2748]: I0912 00:14:50.411042 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dcbf89b99-zbqh4" podStartSLOduration=2.199215327 podStartE2EDuration="17.411025468s" podCreationTimestamp="2025-09-12 00:14:33 +0000 UTC" firstStartedPulling="2025-09-12 00:14:34.600460548 +0000 UTC m=+50.690435329" lastFinishedPulling="2025-09-12 00:14:49.812270669 +0000 UTC m=+65.902245470" observedRunningTime="2025-09-12 00:14:50.41059578 +0000 UTC m=+66.500570571" watchObservedRunningTime="2025-09-12 00:14:50.411025468 +0000 UTC m=+66.501000249" Sep 12 00:14:51.353952 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:48374.service - OpenSSH per-connection server daemon (10.0.0.1:48374). Sep 12 00:14:51.443386 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 48374 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:51.444346 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:51.451407 systemd-logind[1568]: New session 14 of user core. Sep 12 00:14:51.454535 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 00:14:52.339709 sshd[5410]: Connection closed by 10.0.0.1 port 48374 Sep 12 00:14:52.340050 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:52.343873 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:48374.service: Deactivated successfully. Sep 12 00:14:52.346288 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 00:14:52.348085 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Sep 12 00:14:52.349692 systemd-logind[1568]: Removed session 14. 
Sep 12 00:14:52.359797 containerd[1586]: time="2025-09-12T00:14:52.359698273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:52.360731 containerd[1586]: time="2025-09-12T00:14:52.360702796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 00:14:52.362157 containerd[1586]: time="2025-09-12T00:14:52.362122337Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:52.364684 containerd[1586]: time="2025-09-12T00:14:52.364647597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:14:52.365434 containerd[1586]: time="2025-09-12T00:14:52.365344839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.552883994s" Sep 12 00:14:52.365505 containerd[1586]: time="2025-09-12T00:14:52.365439931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 00:14:52.369509 containerd[1586]: time="2025-09-12T00:14:52.369457653Z" level=info msg="CreateContainer within sandbox \"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 00:14:52.378806 containerd[1586]: time="2025-09-12T00:14:52.378734915Z" level=info msg="Container e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:14:52.391613 containerd[1586]: time="2025-09-12T00:14:52.391512063Z" level=info msg="CreateContainer within sandbox \"f7ddeb638afe197a777ac61620a8ee954c213309fc95deb57585748c44fea512\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772\"" Sep 12 00:14:52.392233 containerd[1586]: time="2025-09-12T00:14:52.392191240Z" level=info msg="StartContainer for \"e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772\"" Sep 12 00:14:52.394128 containerd[1586]: time="2025-09-12T00:14:52.394096086Z" level=info msg="connecting to shim e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772" address="unix:///run/containerd/s/efef942fe3dca409dafd35f0e06120a0a32a3d84aafc04859f8abd4bce6a7561" protocol=ttrpc version=3 Sep 12 00:14:52.427631 systemd[1]: Started cri-containerd-e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772.scope - libcontainer container e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772. 
Sep 12 00:14:52.717024 containerd[1586]: time="2025-09-12T00:14:52.716966380Z" level=info msg="StartContainer for \"e8f05486aab1a95dcdc8bb073c984cd17143323520637311a0b76f2b9a833772\" returns successfully" Sep 12 00:14:53.308248 kubelet[2748]: I0912 00:14:53.308207 2748 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 00:14:53.308248 kubelet[2748]: I0912 00:14:53.308254 2748 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 00:14:53.676346 kubelet[2748]: I0912 00:14:53.676138 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kmdzp" podStartSLOduration=34.789509255 podStartE2EDuration="50.67605872s" podCreationTimestamp="2025-09-12 00:14:03 +0000 UTC" firstStartedPulling="2025-09-12 00:14:36.479775179 +0000 UTC m=+52.569749970" lastFinishedPulling="2025-09-12 00:14:52.366324644 +0000 UTC m=+68.456299435" observedRunningTime="2025-09-12 00:14:53.675198988 +0000 UTC m=+69.765173779" watchObservedRunningTime="2025-09-12 00:14:53.67605872 +0000 UTC m=+69.766033521" Sep 12 00:14:57.356101 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:48382.service - OpenSSH per-connection server daemon (10.0.0.1:48382). Sep 12 00:14:57.428936 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 48382 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:14:57.431468 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:14:57.437558 systemd-logind[1568]: New session 15 of user core. Sep 12 00:14:57.444692 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 00:14:57.579917 sshd[5472]: Connection closed by 10.0.0.1 port 48382 Sep 12 00:14:57.580414 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Sep 12 00:14:57.585428 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:48382.service: Deactivated successfully. Sep 12 00:14:57.587547 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 00:14:57.588477 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Sep 12 00:14:57.590153 systemd-logind[1568]: Removed session 15. Sep 12 00:15:02.600708 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:58904.service - OpenSSH per-connection server daemon (10.0.0.1:58904). Sep 12 00:15:02.698541 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 58904 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:02.700350 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:02.705278 systemd-logind[1568]: New session 16 of user core. Sep 12 00:15:02.718511 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 00:15:02.880491 sshd[5489]: Connection closed by 10.0.0.1 port 58904 Sep 12 00:15:02.880818 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:02.885756 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:58904.service: Deactivated successfully. Sep 12 00:15:02.888112 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 00:15:02.889116 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Sep 12 00:15:02.890534 systemd-logind[1568]: Removed session 16. 
Sep 12 00:15:03.464460 containerd[1586]: time="2025-09-12T00:15:03.464412407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\" id:\"eb9641888f559f7ccd3b683d74cdccd42caa59fab7904b0ab2d975b87ffe57d5\" pid:5512 exited_at:{seconds:1757636103 nanos:464042480}" Sep 12 00:15:06.400607 kubelet[2748]: I0912 00:15:06.400520 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:15:07.897938 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:58910.service - OpenSSH per-connection server daemon (10.0.0.1:58910). Sep 12 00:15:07.971906 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 58910 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:07.973699 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:07.978445 systemd-logind[1568]: New session 17 of user core. Sep 12 00:15:07.988521 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 00:15:08.237874 sshd[5531]: Connection closed by 10.0.0.1 port 58910 Sep 12 00:15:08.238332 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:08.243990 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:58910.service: Deactivated successfully. Sep 12 00:15:08.246986 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 00:15:08.248038 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Sep 12 00:15:08.249842 systemd-logind[1568]: Removed session 17. Sep 12 00:15:13.020309 kubelet[2748]: E0912 00:15:13.020220 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:15:13.256038 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:47074.service - OpenSSH per-connection server daemon (10.0.0.1:47074). 
Sep 12 00:15:13.313013 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 47074 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:13.314655 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:13.319316 systemd-logind[1568]: New session 18 of user core. Sep 12 00:15:13.328490 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 00:15:13.457558 sshd[5547]: Connection closed by 10.0.0.1 port 47074 Sep 12 00:15:13.457848 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:13.470632 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:47074.service: Deactivated successfully. Sep 12 00:15:13.473527 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 00:15:13.474471 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Sep 12 00:15:13.478403 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:47086.service - OpenSSH per-connection server daemon (10.0.0.1:47086). Sep 12 00:15:13.479344 systemd-logind[1568]: Removed session 18. Sep 12 00:15:13.535673 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 47086 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:13.537492 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:13.542742 systemd-logind[1568]: New session 19 of user core. Sep 12 00:15:13.557518 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 00:15:13.923150 sshd[5564]: Connection closed by 10.0.0.1 port 47086 Sep 12 00:15:13.923576 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:13.933590 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:47086.service: Deactivated successfully. Sep 12 00:15:13.935849 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 00:15:13.937059 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. 
Sep 12 00:15:13.940732 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:47102.service - OpenSSH per-connection server daemon (10.0.0.1:47102). Sep 12 00:15:13.941505 systemd-logind[1568]: Removed session 19. Sep 12 00:15:14.022966 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 47102 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:14.025235 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:14.030917 systemd-logind[1568]: New session 20 of user core. Sep 12 00:15:14.039576 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 00:15:14.474798 containerd[1586]: time="2025-09-12T00:15:14.474741233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" id:\"c6a612866433870eb002bf79b2e25adb49a269211bf150460e00f96e120b62ad\" pid:5604 exited_at:{seconds:1757636114 nanos:474435390}" Sep 12 00:15:14.630384 sshd[5578]: Connection closed by 10.0.0.1 port 47102 Sep 12 00:15:14.631660 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:14.642127 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:47102.service: Deactivated successfully. Sep 12 00:15:14.644664 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 00:15:14.648056 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Sep 12 00:15:14.652487 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:47112.service - OpenSSH per-connection server daemon (10.0.0.1:47112). Sep 12 00:15:14.656086 systemd-logind[1568]: Removed session 20. Sep 12 00:15:14.713830 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 47112 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:14.715514 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:14.720488 systemd-logind[1568]: New session 21 of user core. 
Sep 12 00:15:14.731623 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 00:15:15.060291 sshd[5627]: Connection closed by 10.0.0.1 port 47112 Sep 12 00:15:15.060830 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:15.073920 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:47112.service: Deactivated successfully. Sep 12 00:15:15.076931 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 00:15:15.080669 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Sep 12 00:15:15.086862 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). Sep 12 00:15:15.091084 systemd-logind[1568]: Removed session 21. Sep 12 00:15:15.164770 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:15.166714 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:15.171625 systemd-logind[1568]: New session 22 of user core. Sep 12 00:15:15.182721 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 00:15:15.304199 sshd[5642]: Connection closed by 10.0.0.1 port 47120 Sep 12 00:15:15.304668 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:15.308898 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:47120.service: Deactivated successfully. Sep 12 00:15:15.310987 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 00:15:15.311958 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. Sep 12 00:15:15.313527 systemd-logind[1568]: Removed session 22. 
Sep 12 00:15:16.020145 kubelet[2748]: E0912 00:15:16.020057 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:15:16.435741 containerd[1586]: time="2025-09-12T00:15:16.435601571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\" id:\"9e3513b3232e06779611edf3d7ce49619897b8badbe0489bb3c9611c521733d2\" pid:5675 exited_at:{seconds:1757636116 nanos:435224685}" Sep 12 00:15:17.019633 kubelet[2748]: E0912 00:15:17.019565 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:15:17.422847 kubelet[2748]: I0912 00:15:17.422690 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:15:20.320391 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:58156.service - OpenSSH per-connection server daemon (10.0.0.1:58156). Sep 12 00:15:20.374666 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 58156 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:20.377178 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:20.382341 systemd-logind[1568]: New session 23 of user core. Sep 12 00:15:20.397687 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 00:15:20.515706 sshd[5695]: Connection closed by 10.0.0.1 port 58156 Sep 12 00:15:20.516151 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:20.520554 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:58156.service: Deactivated successfully. Sep 12 00:15:20.522844 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 00:15:20.523748 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. 
Sep 12 00:15:20.525209 systemd-logind[1568]: Removed session 23. Sep 12 00:15:22.756151 containerd[1586]: time="2025-09-12T00:15:22.756099102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"886d1f8a1710c2dc83a35847fc7c5d4d3dd31a30d531cbd7509d53a0e6e4637b\" id:\"a54d0f8cbb1a5b024683d5fc2db214d480a8553baeb716901112a2422dcedaa2\" pid:5722 exited_at:{seconds:1757636122 nanos:755801276}" Sep 12 00:15:25.020265 kubelet[2748]: E0912 00:15:25.020183 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:15:25.537064 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:58158.service - OpenSSH per-connection server daemon (10.0.0.1:58158). Sep 12 00:15:25.588804 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 58158 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:25.590712 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:25.595568 systemd-logind[1568]: New session 24 of user core. Sep 12 00:15:25.603567 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 00:15:25.730283 sshd[5736]: Connection closed by 10.0.0.1 port 58158 Sep 12 00:15:25.730792 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:25.735817 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:58158.service: Deactivated successfully. Sep 12 00:15:25.738002 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 00:15:25.739063 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. Sep 12 00:15:25.740495 systemd-logind[1568]: Removed session 24. 
Sep 12 00:15:28.018646 containerd[1586]: time="2025-09-12T00:15:28.018580436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba72a361cfd5534bb5d2d773ccbd69a5b05bdc9bc895e0bf67e661d30bc657d6\" id:\"bea06efe26f0580710a121b183df720087a41ee26a51f04aec397f5420f9d5fd\" pid:5760 exited_at:{seconds:1757636128 nanos:18194775}" Sep 12 00:15:30.745820 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:33864.service - OpenSSH per-connection server daemon (10.0.0.1:33864). Sep 12 00:15:30.838732 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:30.841326 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:30.847806 systemd-logind[1568]: New session 25 of user core. Sep 12 00:15:30.857607 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 00:15:31.079396 sshd[5776]: Connection closed by 10.0.0.1 port 33864 Sep 12 00:15:31.080306 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:31.087042 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:33864.service: Deactivated successfully. Sep 12 00:15:31.089693 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 00:15:31.090659 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. Sep 12 00:15:31.092375 systemd-logind[1568]: Removed session 25. Sep 12 00:15:33.412515 containerd[1586]: time="2025-09-12T00:15:33.412437937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a0a144f18ec90b90c0554be43085ae909fdcaeb527d97681eb6a00e74182b59\" id:\"15fe55282c0501282745ee25b4d2e532d856bd7a1f88c9ad09d3a92794b7e1b4\" pid:5800 exited_at:{seconds:1757636133 nanos:411602734}" Sep 12 00:15:36.098469 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:33872.service - OpenSSH per-connection server daemon (10.0.0.1:33872). 
Sep 12 00:15:36.210462 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 33872 ssh2: RSA SHA256:x73ldEr2qeXE9ibHaxoy3WYcFpfdGmGcQ8OLP2Cn2xU Sep 12 00:15:36.212697 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:15:36.220792 systemd-logind[1568]: New session 26 of user core. Sep 12 00:15:36.227687 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 00:15:36.543211 sshd[5818]: Connection closed by 10.0.0.1 port 33872 Sep 12 00:15:36.543769 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Sep 12 00:15:36.553197 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:33872.service: Deactivated successfully. Sep 12 00:15:36.556073 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 00:15:36.558182 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit. Sep 12 00:15:36.562042 systemd-logind[1568]: Removed session 26. Sep 12 00:15:38.022391 kubelet[2748]: E0912 00:15:38.021990 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"