Sep 12 22:54:06.294055 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 22:54:06.294110 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:54:06.294122 kernel: BIOS-provided physical RAM map:
Sep 12 22:54:06.294131 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 22:54:06.294139 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 22:54:06.294148 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 22:54:06.294159 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 22:54:06.294169 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 22:54:06.294184 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 22:54:06.294193 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 22:54:06.294202 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 22:54:06.294210 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 22:54:06.294219 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 22:54:06.294229 kernel: NX (Execute Disable) protection: active
Sep 12 22:54:06.294243 kernel: APIC: Static calls initialized
Sep 12 22:54:06.294253 kernel: SMBIOS 2.8 present.
Sep 12 22:54:06.294267 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 22:54:06.294277 kernel: DMI: Memory slots populated: 1/1
Sep 12 22:54:06.294287 kernel: Hypervisor detected: KVM
Sep 12 22:54:06.294297 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 22:54:06.294307 kernel: kvm-clock: using sched offset of 7518955882 cycles
Sep 12 22:54:06.294318 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 22:54:06.294329 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 22:54:06.294344 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 22:54:06.294356 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 22:54:06.294367 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 22:54:06.294378 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 22:54:06.294388 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 22:54:06.294398 kernel: Using GB pages for direct mapping
Sep 12 22:54:06.294408 kernel: ACPI: Early table checksum verification disabled
Sep 12 22:54:06.294419 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 22:54:06.294430 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294445 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294456 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294466 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 22:54:06.294476 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294488 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294498 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294509 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:54:06.294520 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 22:54:06.297282 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 22:54:06.297299 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 22:54:06.297310 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 22:54:06.297320 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 22:54:06.297331 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 22:54:06.297341 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 22:54:06.297354 kernel: No NUMA configuration found
Sep 12 22:54:06.297365 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 22:54:06.297376 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 22:54:06.297386 kernel: Zone ranges:
Sep 12 22:54:06.297397 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 22:54:06.297407 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 22:54:06.302761 kernel: Normal empty
Sep 12 22:54:06.302804 kernel: Device empty
Sep 12 22:54:06.302823 kernel: Movable zone start for each node
Sep 12 22:54:06.302833 kernel: Early memory node ranges
Sep 12 22:54:06.302852 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 22:54:06.302861 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 22:54:06.302871 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 22:54:06.302880 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 22:54:06.302890 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 22:54:06.302900 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 22:54:06.302909 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 22:54:06.302924 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 22:54:06.302933 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 22:54:06.302945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 22:54:06.302955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 22:54:06.302967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 22:54:06.302976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 22:54:06.302986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 22:54:06.302995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 22:54:06.303005 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 22:54:06.303014 kernel: TSC deadline timer available
Sep 12 22:54:06.303023 kernel: CPU topo: Max. logical packages: 1
Sep 12 22:54:06.303035 kernel: CPU topo: Max. logical dies: 1
Sep 12 22:54:06.303045 kernel: CPU topo: Max. dies per package: 1
Sep 12 22:54:06.303054 kernel: CPU topo: Max. threads per core: 1
Sep 12 22:54:06.303066 kernel: CPU topo: Num. cores per package: 4
Sep 12 22:54:06.303076 kernel: CPU topo: Num. threads per package: 4
Sep 12 22:54:06.303085 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 22:54:06.303094 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 22:54:06.303104 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 22:54:06.303113 kernel: kvm-guest: setup PV sched yield
Sep 12 22:54:06.303125 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 22:54:06.303135 kernel: Booting paravirtualized kernel on KVM
Sep 12 22:54:06.303144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 22:54:06.303154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 22:54:06.303163 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 22:54:06.303173 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 22:54:06.303183 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 22:54:06.303192 kernel: kvm-guest: PV spinlocks enabled
Sep 12 22:54:06.303201 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 22:54:06.303215 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:54:06.303225 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 22:54:06.303234 kernel: random: crng init done
Sep 12 22:54:06.303244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 22:54:06.303253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 22:54:06.303263 kernel: Fallback order for Node 0: 0
Sep 12 22:54:06.303272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 22:54:06.303281 kernel: Policy zone: DMA32
Sep 12 22:54:06.303293 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 22:54:06.303302 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 22:54:06.303312 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 22:54:06.303321 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 22:54:06.303330 kernel: Dynamic Preempt: voluntary
Sep 12 22:54:06.303340 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 22:54:06.303351 kernel: rcu: RCU event tracing is enabled.
Sep 12 22:54:06.303360 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 22:54:06.303370 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 22:54:06.303382 kernel: Rude variant of Tasks RCU enabled.
Sep 12 22:54:06.303394 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 22:54:06.303403 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 22:54:06.303413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 22:54:06.303422 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:54:06.303432 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:54:06.303441 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:54:06.303451 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 22:54:06.303460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 22:54:06.303480 kernel: Console: colour VGA+ 80x25
Sep 12 22:54:06.303490 kernel: printk: legacy console [ttyS0] enabled
Sep 12 22:54:06.303500 kernel: ACPI: Core revision 20240827
Sep 12 22:54:06.303512 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 22:54:06.303522 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 22:54:06.303557 kernel: x2apic enabled
Sep 12 22:54:06.303577 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 22:54:06.303594 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 22:54:06.303604 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 22:54:06.303619 kernel: kvm-guest: setup PV IPIs
Sep 12 22:54:06.303629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 22:54:06.303639 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 22:54:06.303649 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 22:54:06.303659 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 22:54:06.303669 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 22:54:06.303679 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 22:54:06.303701 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 22:54:06.303713 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 22:54:06.303723 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 22:54:06.303733 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 22:54:06.303743 kernel: active return thunk: retbleed_return_thunk
Sep 12 22:54:06.303752 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 22:54:06.303762 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 22:54:06.303772 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 22:54:06.303782 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 22:54:06.303795 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 22:54:06.303805 kernel: active return thunk: srso_return_thunk
Sep 12 22:54:06.303815 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 22:54:06.304734 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 22:54:06.304747 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 22:54:06.304759 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 22:54:06.304771 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 22:54:06.304783 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 22:54:06.304795 kernel: Freeing SMP alternatives memory: 32K
Sep 12 22:54:06.304813 kernel: pid_max: default: 32768 minimum: 301
Sep 12 22:54:06.304824 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 22:54:06.304834 kernel: landlock: Up and running.
Sep 12 22:54:06.304846 kernel: SELinux: Initializing.
Sep 12 22:54:06.304863 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:54:06.304875 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:54:06.304886 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 22:54:06.304898 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 22:54:06.304911 kernel: ... version: 0
Sep 12 22:54:06.304927 kernel: ... bit width: 48
Sep 12 22:54:06.304939 kernel: ... generic registers: 6
Sep 12 22:54:06.304950 kernel: ... value mask: 0000ffffffffffff
Sep 12 22:54:06.304962 kernel: ... max period: 00007fffffffffff
Sep 12 22:54:06.304973 kernel: ... fixed-purpose events: 0
Sep 12 22:54:06.304985 kernel: ... event mask: 000000000000003f
Sep 12 22:54:06.304996 kernel: signal: max sigframe size: 1776
Sep 12 22:54:06.305008 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 22:54:06.305020 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 22:54:06.305036 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 22:54:06.305047 kernel: smp: Bringing up secondary CPUs ...
Sep 12 22:54:06.305059 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 22:54:06.305071 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 22:54:06.305083 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 22:54:06.305095 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 22:54:06.305108 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 136904K reserved, 0K cma-reserved)
Sep 12 22:54:06.305120 kernel: devtmpfs: initialized
Sep 12 22:54:06.305132 kernel: x86/mm: Memory block size: 128MB
Sep 12 22:54:06.305147 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 22:54:06.305160 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 22:54:06.305172 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 22:54:06.305184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 22:54:06.305196 kernel: audit: initializing netlink subsys (disabled)
Sep 12 22:54:06.305207 kernel: audit: type=2000 audit(1757717639.909:1): state=initialized audit_enabled=0 res=1
Sep 12 22:54:06.305219 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 22:54:06.305230 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 22:54:06.305241 kernel: cpuidle: using governor menu
Sep 12 22:54:06.305257 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 22:54:06.305269 kernel: dca service started, version 1.12.1
Sep 12 22:54:06.305281 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 22:54:06.305293 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 22:54:06.305304 kernel: PCI: Using configuration type 1 for base access
Sep 12 22:54:06.305316 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 22:54:06.305328 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 22:54:06.305340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 22:54:06.305352 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 22:54:06.305368 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 22:54:06.305380 kernel: ACPI: Added _OSI(Module Device)
Sep 12 22:54:06.305392 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 22:54:06.305404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 22:54:06.305416 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 22:54:06.305427 kernel: ACPI: Interpreter enabled
Sep 12 22:54:06.305438 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 22:54:06.305450 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 22:54:06.305462 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 22:54:06.305476 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 22:54:06.305489 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 22:54:06.305501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 22:54:06.305923 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 22:54:06.306099 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 22:54:06.306243 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 22:54:06.306260 kernel: PCI host bridge to bus 0000:00
Sep 12 22:54:06.306446 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 22:54:06.306616 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 22:54:06.306766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 22:54:06.306902 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 22:54:06.307037 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 22:54:06.307171 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 22:54:06.307307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 22:54:06.307546 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 22:54:06.312871 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 22:54:06.313075 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 22:54:06.313211 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 22:54:06.313343 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 22:54:06.313504 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 22:54:06.313724 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 22:54:06.313877 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 22:54:06.317807 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 22:54:06.318019 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 22:54:06.318209 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 22:54:06.318371 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 22:54:06.318546 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 22:54:06.318711 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 22:54:06.318894 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 22:54:06.319047 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 22:54:06.319204 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 22:54:06.319369 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 22:54:06.319549 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 22:54:06.319751 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 22:54:06.319924 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 22:54:06.320109 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 22:54:06.320274 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 22:54:06.320437 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 22:54:06.320642 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 22:54:06.325330 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 22:54:06.325368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 22:54:06.325388 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 22:54:06.325400 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 22:54:06.325411 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 22:54:06.325422 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 22:54:06.325433 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 22:54:06.325444 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 22:54:06.325455 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 22:54:06.325466 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 22:54:06.325477 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 22:54:06.325491 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 22:54:06.325502 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 22:54:06.325513 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 22:54:06.325524 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 22:54:06.325549 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 22:54:06.325560 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 22:54:06.325571 kernel: iommu: Default domain type: Translated
Sep 12 22:54:06.325582 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 22:54:06.325593 kernel: PCI: Using ACPI for IRQ routing
Sep 12 22:54:06.325607 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 22:54:06.325618 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 22:54:06.325630 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 22:54:06.325811 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 22:54:06.325970 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 22:54:06.326122 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 22:54:06.326138 kernel: vgaarb: loaded
Sep 12 22:54:06.326149 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 22:54:06.326162 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 22:54:06.326177 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 22:54:06.326188 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 22:54:06.326200 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 22:54:06.326212 kernel: pnp: PnP ACPI init
Sep 12 22:54:06.326421 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 22:54:06.326440 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 22:54:06.326452 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 22:54:06.326464 kernel: NET: Registered PF_INET protocol family
Sep 12 22:54:06.326479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 22:54:06.326491 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 22:54:06.326502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 22:54:06.326514 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 22:54:06.326525 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 22:54:06.326552 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 22:54:06.326563 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:54:06.326576 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:54:06.326591 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 22:54:06.326602 kernel: NET: Registered PF_XDP protocol family
Sep 12 22:54:06.329808 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 22:54:06.329945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 22:54:06.330068 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 22:54:06.330216 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 22:54:06.330339 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 22:54:06.330460 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 22:54:06.330472 kernel: PCI: CLS 0 bytes, default 64
Sep 12 22:54:06.330488 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 22:54:06.330499 kernel: Initialise system trusted keyrings
Sep 12 22:54:06.330509 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 22:54:06.330519 kernel: Key type asymmetric registered
Sep 12 22:54:06.330544 kernel: Asymmetric key parser 'x509' registered
Sep 12 22:54:06.330554 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 22:54:06.330565 kernel: io scheduler mq-deadline registered
Sep 12 22:54:06.330575 kernel: io scheduler kyber registered
Sep 12 22:54:06.330585 kernel: io scheduler bfq registered
Sep 12 22:54:06.330598 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 22:54:06.330609 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 22:54:06.330620 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 22:54:06.330630 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 22:54:06.330640 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 22:54:06.330650 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 22:54:06.330660 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 22:54:06.330670 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 22:54:06.331715 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 22:54:06.331919 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 22:54:06.332053 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 22:54:06.332181 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T22:54:05 UTC (1757717645)
Sep 12 22:54:06.332307 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 22:54:06.332319 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 22:54:06.332330 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 22:54:06.332340 kernel: NET: Registered PF_INET6 protocol family
Sep 12 22:54:06.332351 kernel: Segment Routing with IPv6
Sep 12 22:54:06.332365 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 22:54:06.332375 kernel: NET: Registered PF_PACKET protocol family
Sep 12 22:54:06.332385 kernel: Key type dns_resolver registered
Sep 12 22:54:06.332394 kernel: IPI shorthand broadcast: enabled
Sep 12 22:54:06.332405 kernel: sched_clock: Marking stable (6019004834, 142351912)->(6279305432, -117948686)
Sep 12 22:54:06.332415 kernel: registered taskstats version 1
Sep 12 22:54:06.332425 kernel: Loading compiled-in X.509 certificates
Sep 12 22:54:06.332435 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 12 22:54:06.332445 kernel: Demotion targets for Node 0: null
Sep 12 22:54:06.332457 kernel: Key type .fscrypt registered
Sep 12 22:54:06.332467 kernel: Key type fscrypt-provisioning registered
Sep 12 22:54:06.332477 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 22:54:06.332487 kernel: ima: Allocated hash algorithm: sha1
Sep 12 22:54:06.332497 kernel: ima: No architecture policies found
Sep 12 22:54:06.332507 kernel: clk: Disabling unused clocks
Sep 12 22:54:06.332518 kernel: Warning: unable to open an initial console.
Sep 12 22:54:06.332552 kernel: Freeing unused kernel image (initmem) memory: 54084K
Sep 12 22:54:06.332568 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 22:54:06.332580 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 12 22:54:06.332593 kernel: Run /init as init process
Sep 12 22:54:06.332605 kernel: with arguments:
Sep 12 22:54:06.332618 kernel: /init
Sep 12 22:54:06.332631 kernel: with environment:
Sep 12 22:54:06.332643 kernel: HOME=/
Sep 12 22:54:06.332655 kernel: TERM=linux
Sep 12 22:54:06.332667 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 22:54:06.334736 systemd[1]: Successfully made /usr/ read-only.
Sep 12 22:54:06.334780 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 22:54:06.334794 systemd[1]: Detected virtualization kvm.
Sep 12 22:54:06.334805 systemd[1]: Detected architecture x86-64.
Sep 12 22:54:06.334816 systemd[1]: Running in initrd.
Sep 12 22:54:06.334827 systemd[1]: No hostname configured, using default hostname.
Sep 12 22:54:06.334841 systemd[1]: Hostname set to .
Sep 12 22:54:06.334851 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 22:54:06.334862 systemd[1]: Queued start job for default target initrd.target.
Sep 12 22:54:06.334873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:54:06.334884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:54:06.334897 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 22:54:06.334908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 22:54:06.334921 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 22:54:06.334935 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 22:54:06.334948 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 22:54:06.334959 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 22:54:06.334970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:54:06.334981 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:54:06.334992 systemd[1]: Reached target paths.target - Path Units.
Sep 12 22:54:06.335005 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 22:54:06.335016 systemd[1]: Reached target swap.target - Swaps.
Sep 12 22:54:06.335027 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 22:54:06.335038 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 22:54:06.335048 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 22:54:06.335059 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 22:54:06.335070 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 22:54:06.335081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:54:06.335092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:54:06.335105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:54:06.335116 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:54:06.335126 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 22:54:06.335137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:54:06.335150 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 22:54:06.335164 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 22:54:06.335175 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 22:54:06.335186 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:54:06.335197 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:54:06.335208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:54:06.335219 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 22:54:06.335233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:54:06.335244 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 22:54:06.335255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:54:06.335319 systemd-journald[221]: Collecting audit messages is disabled. Sep 12 22:54:06.335350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 22:54:06.335362 systemd-journald[221]: Journal started Sep 12 22:54:06.335386 systemd-journald[221]: Runtime Journal (/run/log/journal/1ab2124f609b402caf5bce82a94d7365) is 6M, max 48.6M, 42.5M free. Sep 12 22:54:06.340897 systemd-modules-load[222]: Inserted module 'overlay' Sep 12 22:54:06.393396 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:54:06.396336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:54:06.409503 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 22:54:06.416021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:54:06.426720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:54:06.444613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 22:54:06.450131 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 12 22:54:06.451361 kernel: Bridge firewalling registered Sep 12 22:54:06.462589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:54:06.466945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:54:06.482144 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 22:54:06.493889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:54:06.505892 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:54:06.522419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:54:06.544947 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 22:54:06.561244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 22:54:06.584041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:54:06.620579 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:54:06.734793 systemd-resolved[263]: Positive Trust Anchors: Sep 12 22:54:06.736807 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:54:06.739398 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:54:06.744330 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 12 22:54:06.750335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:54:06.770113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:54:06.920603 kernel: SCSI subsystem initialized Sep 12 22:54:06.964123 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 22:54:06.993266 kernel: iscsi: registered transport (tcp) Sep 12 22:54:07.042160 kernel: iscsi: registered transport (qla4xxx) Sep 12 22:54:07.042254 kernel: QLogic iSCSI HBA Driver Sep 12 22:54:07.110873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:54:07.153370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:54:07.158339 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:54:07.521968 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 22:54:07.526942 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 22:54:07.612637 kernel: raid6: avx2x4 gen() 18804 MB/s Sep 12 22:54:07.630579 kernel: raid6: avx2x2 gen() 19724 MB/s Sep 12 22:54:07.648300 kernel: raid6: avx2x1 gen() 14563 MB/s Sep 12 22:54:07.648387 kernel: raid6: using algorithm avx2x2 gen() 19724 MB/s Sep 12 22:54:07.670891 kernel: raid6: .... xor() 11985 MB/s, rmw enabled Sep 12 22:54:07.670978 kernel: raid6: using avx2x2 recovery algorithm Sep 12 22:54:07.705589 kernel: xor: automatically using best checksumming function avx Sep 12 22:54:08.056772 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 22:54:08.075685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:54:08.080716 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:54:08.178343 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 12 22:54:08.195892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:54:08.200172 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 22:54:08.294268 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Sep 12 22:54:08.398097 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 22:54:08.422725 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:54:08.588556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:54:08.598733 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 22:54:08.737731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:54:08.738109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:54:08.751073 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:54:08.755628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:54:08.769255 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:54:08.782575 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 22:54:09.163665 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 22:54:09.169254 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 22:54:09.169306 kernel: GPT:9289727 != 19775487 Sep 12 22:54:09.169322 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 22:54:09.171099 kernel: GPT:9289727 != 19775487 Sep 12 22:54:09.171140 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 22:54:09.171155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:54:09.175667 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 22:54:09.186032 kernel: libata version 3.00 loaded. 
Sep 12 22:54:09.238951 kernel: AES CTR mode by8 optimization enabled Sep 12 22:54:09.239037 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 22:54:09.239347 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 22:54:09.256591 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 22:54:09.256904 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 22:54:09.257091 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 22:54:09.257253 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 22:54:09.294900 kernel: scsi host0: ahci Sep 12 22:54:09.296566 kernel: scsi host1: ahci Sep 12 22:54:09.312557 kernel: scsi host2: ahci Sep 12 22:54:09.312880 kernel: scsi host3: ahci Sep 12 22:54:09.313064 kernel: scsi host4: ahci Sep 12 22:54:09.325713 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 22:54:09.359685 kernel: scsi host5: ahci Sep 12 22:54:09.359983 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 12 22:54:09.360003 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 12 22:54:09.360027 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 12 22:54:09.360041 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 12 22:54:09.360055 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 12 22:54:09.360069 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 12 22:54:09.353251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:54:09.392079 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 12 22:54:09.394921 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 22:54:09.425387 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 22:54:09.453318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 22:54:09.461725 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 22:54:09.515026 disk-uuid[635]: Primary Header is updated. Sep 12 22:54:09.515026 disk-uuid[635]: Secondary Entries is updated. Sep 12 22:54:09.515026 disk-uuid[635]: Secondary Header is updated. Sep 12 22:54:09.527780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:54:09.535097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:54:09.657726 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 22:54:09.664862 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 22:54:09.668866 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 22:54:09.668925 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 22:54:09.668941 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 22:54:09.668956 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 22:54:09.672050 kernel: ata3.00: applying bridge limits Sep 12 22:54:09.674657 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 22:54:09.674698 kernel: ata3.00: configured for UDMA/100 Sep 12 22:54:09.674713 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 22:54:09.676021 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 22:54:09.678786 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 22:54:09.814580 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 22:54:09.814927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 22:54:09.856584 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 22:54:10.363967 systemd[1]: Finished 
dracut-initqueue.service - dracut initqueue hook. Sep 12 22:54:10.366904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:54:10.379483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:54:10.386139 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:54:10.401899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 22:54:10.474796 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:54:10.554373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:54:10.562959 disk-uuid[636]: The operation has completed successfully. Sep 12 22:54:10.636738 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 22:54:10.637703 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 22:54:10.690962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 22:54:10.719157 sh[665]: Success Sep 12 22:54:10.746466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 22:54:10.746585 kernel: device-mapper: uevent: version 1.0.3 Sep 12 22:54:10.748795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 22:54:10.867206 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 22:54:10.943266 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 22:54:10.945704 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 22:54:10.985884 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 22:54:10.997127 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677) Sep 12 22:54:11.000448 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 22:54:11.000484 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:54:11.017967 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 22:54:11.018057 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 22:54:11.024014 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 22:54:11.028379 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:54:11.031099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 22:54:11.040526 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 22:54:11.046290 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 22:54:11.122382 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 12 22:54:11.131707 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:54:11.131789 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:54:11.144375 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:54:11.144457 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:54:11.171139 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:54:11.178886 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 22:54:11.186413 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 22:54:11.328126 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:54:11.337989 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:54:11.569029 systemd-networkd[847]: lo: Link UP Sep 12 22:54:11.569047 systemd-networkd[847]: lo: Gained carrier Sep 12 22:54:11.571224 systemd-networkd[847]: Enumeration completed Sep 12 22:54:11.571493 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:54:11.571793 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:54:11.571799 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:54:11.572362 systemd-networkd[847]: eth0: Link UP Sep 12 22:54:11.704141 systemd-networkd[847]: eth0: Gained carrier Sep 12 22:54:11.704167 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:54:11.992331 systemd[1]: Reached target network.target - Network. 
Sep 12 22:54:12.018661 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 22:54:12.070678 ignition[751]: Ignition 2.22.0 Sep 12 22:54:12.070700 ignition[751]: Stage: fetch-offline Sep 12 22:54:12.070745 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:12.070756 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:12.070873 ignition[751]: parsed url from cmdline: "" Sep 12 22:54:12.070878 ignition[751]: no config URL provided Sep 12 22:54:12.070884 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:54:12.070894 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:54:12.070928 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 12 22:54:12.070934 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 22:54:12.118193 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 12 22:54:12.183264 ignition[751]: parsing config with SHA512: cc0d35e8560ed4fb4f2a6e6e68f593617de6f44261d17aaad23bdce4b7708610bcae1349e6ffcad706a9264fb273461c9f20b5f8516d4b1bb808b4cc9c09342f Sep 12 22:54:12.388496 unknown[751]: fetched base config from "system" Sep 12 22:54:12.388702 unknown[751]: fetched user config from "qemu" Sep 12 22:54:12.390157 ignition[751]: fetch-offline: fetch-offline passed Sep 12 22:54:12.390252 ignition[751]: Ignition finished successfully Sep 12 22:54:12.401941 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:54:12.422671 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 22:54:12.427761 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 22:54:12.519226 ignition[860]: Ignition 2.22.0 Sep 12 22:54:12.520600 ignition[860]: Stage: kargs Sep 12 22:54:12.526017 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:12.526486 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:12.534308 ignition[860]: kargs: kargs passed Sep 12 22:54:12.534378 ignition[860]: Ignition finished successfully Sep 12 22:54:12.560985 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 22:54:12.576930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 22:54:12.708651 ignition[868]: Ignition 2.22.0 Sep 12 22:54:12.708821 ignition[868]: Stage: disks Sep 12 22:54:12.710086 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:12.710104 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:12.711385 ignition[868]: disks: disks passed Sep 12 22:54:12.711439 ignition[868]: Ignition finished successfully Sep 12 22:54:12.726003 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 22:54:12.728290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 22:54:12.733907 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 22:54:12.735346 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:54:12.739737 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:54:12.740934 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:54:12.746208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 22:54:12.819816 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 22:54:12.834262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 22:54:12.846640 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 12 22:54:13.084768 systemd-networkd[847]: eth0: Gained IPv6LL Sep 12 22:54:13.310212 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none. Sep 12 22:54:13.314820 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 22:54:13.316032 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 22:54:13.325648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:54:13.332725 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 22:54:13.336832 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 22:54:13.336892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 22:54:13.336923 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:54:13.355984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 22:54:13.361736 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 22:54:13.385484 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 12 22:54:13.392749 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:54:13.392807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:54:13.419447 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:54:13.419552 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:54:13.421312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 22:54:14.005801 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 22:54:14.033242 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 12 22:54:14.060029 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 22:54:14.087523 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 22:54:14.600148 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 22:54:14.608884 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 22:54:14.634783 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 22:54:14.656041 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 22:54:14.660810 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:54:14.722926 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 22:54:14.777105 ignition[999]: INFO : Ignition 2.22.0 Sep 12 22:54:14.777105 ignition[999]: INFO : Stage: mount Sep 12 22:54:14.781244 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:14.781244 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:14.783837 ignition[999]: INFO : mount: mount passed Sep 12 22:54:14.783837 ignition[999]: INFO : Ignition finished successfully Sep 12 22:54:14.792134 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 22:54:14.797156 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 22:54:14.831730 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 22:54:14.877311 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 12 22:54:14.877386 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:54:14.877401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:54:14.897315 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:54:14.897401 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:54:14.908952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 22:54:15.034058 ignition[1028]: INFO : Ignition 2.22.0 Sep 12 22:54:15.034058 ignition[1028]: INFO : Stage: files Sep 12 22:54:15.043360 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:15.043360 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:15.054540 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Sep 12 22:54:15.056787 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 22:54:15.056787 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 22:54:15.071956 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 22:54:15.088016 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 22:54:15.088016 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 22:54:15.074236 unknown[1028]: wrote ssh authorized keys file for user: core Sep 12 22:54:15.104294 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 22:54:15.104294 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 
12 22:54:15.169353 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 22:54:15.306352 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 22:54:15.309100 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 22:54:15.309100 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:54:15.315211 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:54:15.332095 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:54:15.332095 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:54:15.332095 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 22:54:15.340750 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 22:54:15.340750 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 22:54:15.340750 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 22:54:15.827993 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 22:54:17.796319 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 22:54:17.796319 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 22:54:17.803039 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 22:54:17.816971 
ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 22:54:17.816971 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 22:54:17.890610 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 22:54:17.918459 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:54:17.920615 ignition[1028]: INFO : files: files passed Sep 12 22:54:17.920615 ignition[1028]: INFO : Ignition finished successfully Sep 12 22:54:17.925672 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 22:54:17.933396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 22:54:17.935447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 22:54:17.969499 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 22:54:17.971452 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 12 22:54:17.986652 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:54:17.986652 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:54:17.971682 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 22:54:18.019298 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:54:17.984925 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:54:17.996037 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 22:54:18.005679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 22:54:18.135661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 22:54:18.137456 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 22:54:18.142435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 22:54:18.146937 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 22:54:18.148442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 22:54:18.151771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 22:54:18.210356 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:54:18.215067 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 22:54:18.270818 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:54:18.273688 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:54:18.278256 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 12 22:54:18.280638 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 22:54:18.280862 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:54:18.298297 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 22:54:18.303993 systemd[1]: Stopped target basic.target - Basic System. Sep 12 22:54:18.321899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 22:54:18.338354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:54:18.344349 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 22:54:18.357597 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:54:18.364393 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 22:54:18.369807 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:54:18.373886 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 22:54:18.384046 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 22:54:18.390020 systemd[1]: Stopped target swap.target - Swaps. Sep 12 22:54:18.403345 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 22:54:18.403582 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:54:18.410046 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:54:18.414740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:54:18.422028 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 22:54:18.422267 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:54:18.434642 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 22:54:18.434852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 22:54:18.472901 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 22:54:18.474250 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:54:18.481573 systemd[1]: Stopped target paths.target - Path Units. Sep 12 22:54:18.483250 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 22:54:18.491140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:54:18.500390 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 22:54:18.506348 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 22:54:18.511653 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 22:54:18.512182 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:54:18.526941 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 22:54:18.528910 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 22:54:18.530510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 22:54:18.530719 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:54:18.540947 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 22:54:18.541112 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 22:54:18.545589 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 22:54:18.549504 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 22:54:18.549718 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:54:18.556759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 22:54:18.562309 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 22:54:18.562638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 22:54:18.564602 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 22:54:18.564860 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:54:18.600602 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 22:54:18.600775 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 22:54:18.626933 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 22:54:18.666601 ignition[1084]: INFO : Ignition 2.22.0 Sep 12 22:54:18.666601 ignition[1084]: INFO : Stage: umount Sep 12 22:54:18.666601 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:54:18.666601 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:54:18.666601 ignition[1084]: INFO : umount: umount passed Sep 12 22:54:18.666601 ignition[1084]: INFO : Ignition finished successfully Sep 12 22:54:18.681578 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 22:54:18.681951 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 22:54:18.688903 systemd[1]: Stopped target network.target - Network. Sep 12 22:54:18.702174 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 22:54:18.702307 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 22:54:18.702448 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 22:54:18.702501 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 22:54:18.710447 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 22:54:18.710584 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 22:54:18.736675 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 22:54:18.736785 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 22:54:18.740164 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 12 22:54:18.753657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 22:54:18.791441 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 22:54:18.791794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 22:54:18.857444 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 22:54:18.859050 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 22:54:18.862745 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 22:54:18.885014 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 22:54:18.888894 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 22:54:18.894490 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 22:54:18.894612 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:54:18.904595 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 22:54:18.933583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 22:54:18.933730 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:54:18.938084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:54:18.938175 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:54:18.961789 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 22:54:18.961902 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 22:54:18.966907 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 22:54:18.967001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:54:18.976911 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 22:54:18.978469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 22:54:18.978584 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:54:18.987105 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 22:54:18.988964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 22:54:18.993848 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 22:54:18.994019 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 22:54:18.999178 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 22:54:19.002200 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:54:19.017446 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 22:54:19.017549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 22:54:19.021715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 22:54:19.021788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:54:19.025553 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 22:54:19.025665 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:54:19.039592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 22:54:19.039718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 22:54:19.043001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 22:54:19.043117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:54:19.059829 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 22:54:19.070509 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 12 22:54:19.070686 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:54:19.081357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 22:54:19.081459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:54:19.095719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:54:19.095833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:54:19.107108 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 22:54:19.107211 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 22:54:19.107289 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:54:19.107925 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 22:54:19.108099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 22:54:19.111513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 22:54:19.112865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 22:54:19.123116 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 22:54:19.146642 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 22:54:19.198669 systemd[1]: Switching root. Sep 12 22:54:19.258222 systemd-journald[221]: Journal stopped Sep 12 22:54:22.146990 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). 
Sep 12 22:54:22.147086 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 22:54:22.147108 kernel: SELinux: policy capability open_perms=1 Sep 12 22:54:22.147126 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 22:54:22.147150 kernel: SELinux: policy capability always_check_network=0 Sep 12 22:54:22.147168 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 22:54:22.147202 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 22:54:22.147220 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 22:54:22.147245 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 22:54:22.147320 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 22:54:22.147468 kernel: audit: type=1403 audit(1757717660.183:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 22:54:22.147502 systemd[1]: Successfully loaded SELinux policy in 155.449ms. Sep 12 22:54:22.147527 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.330ms. Sep 12 22:54:22.147564 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:54:22.147579 systemd[1]: Detected virtualization kvm. Sep 12 22:54:22.147598 systemd[1]: Detected architecture x86-64. Sep 12 22:54:22.147613 systemd[1]: Detected first boot. Sep 12 22:54:22.147628 systemd[1]: Initializing machine ID from VM UUID. Sep 12 22:54:22.147642 zram_generator::config[1143]: No configuration found. 
Sep 12 22:54:22.147658 kernel: Guest personality initialized and is inactive Sep 12 22:54:22.147673 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 22:54:22.147687 kernel: Initialized host personality Sep 12 22:54:22.147701 kernel: NET: Registered PF_VSOCK protocol family Sep 12 22:54:22.147719 systemd[1]: Populated /etc with preset unit settings. Sep 12 22:54:22.147736 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 22:54:22.147751 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 22:54:22.147767 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 22:54:22.147783 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 22:54:22.147798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 22:54:22.147820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 22:54:22.147836 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 22:54:22.147873 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 22:54:22.147897 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 22:54:22.147913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 22:54:22.147928 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 22:54:22.147942 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 22:54:22.147956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:54:22.147971 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:54:22.147985 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 12 22:54:22.148000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 22:54:22.148018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 22:54:22.148033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:54:22.148053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 22:54:22.148073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:54:22.148089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:54:22.148104 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 22:54:22.148120 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 22:54:22.148135 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 22:54:22.148154 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 22:54:22.148168 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:54:22.148183 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:54:22.148197 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:54:22.148224 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:54:22.148239 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 22:54:22.148253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 22:54:22.148279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 22:54:22.148294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:54:22.148314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:54:22.148329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 22:54:22.148344 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 22:54:22.148358 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 22:54:22.148372 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 22:54:22.148387 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 22:54:22.148408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:54:22.148428 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 22:54:22.148452 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 22:54:22.148470 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 22:54:22.148489 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 22:54:22.148526 systemd[1]: Reached target machines.target - Containers. Sep 12 22:54:22.148583 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 22:54:22.148614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:54:22.148657 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:54:22.148675 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 22:54:22.148690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:54:22.148705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:54:22.148737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:54:22.148752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 12 22:54:22.148767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:54:22.148782 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 22:54:22.148796 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 22:54:22.148811 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 22:54:22.148825 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 22:54:22.148839 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 22:54:22.148857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:54:22.148872 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:54:22.148886 kernel: fuse: init (API version 7.41) Sep 12 22:54:22.148900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:54:22.148914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:54:22.148929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 22:54:22.148944 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 22:54:22.148960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:54:22.148977 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 22:54:22.148992 systemd[1]: Stopped verity-setup.service. Sep 12 22:54:22.149007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 22:54:22.149022 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 22:54:22.149037 kernel: loop: module loaded Sep 12 22:54:22.149062 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 22:54:22.149077 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 22:54:22.149147 systemd-journald[1214]: Collecting audit messages is disabled. Sep 12 22:54:22.149175 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 22:54:22.149190 systemd-journald[1214]: Journal started Sep 12 22:54:22.149220 systemd-journald[1214]: Runtime Journal (/run/log/journal/1ab2124f609b402caf5bce82a94d7365) is 6M, max 48.6M, 42.5M free. Sep 12 22:54:21.471008 systemd[1]: Queued start job for default target multi-user.target. Sep 12 22:54:21.497296 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 22:54:21.497977 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 22:54:22.157712 kernel: ACPI: bus type drm_connector registered Sep 12 22:54:22.157769 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:54:22.159492 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 22:54:22.162028 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 22:54:22.164050 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 22:54:22.167242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:54:22.170189 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 22:54:22.171714 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 22:54:22.173550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:54:22.173911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:54:22.177160 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 12 22:54:22.177440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:54:22.179089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:54:22.179348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:54:22.181062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 22:54:22.181466 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 22:54:22.183239 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:54:22.183564 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:54:22.189737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:54:22.196855 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:54:22.198909 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 22:54:22.208188 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 22:54:22.245117 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:54:22.250743 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 22:54:22.260664 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 22:54:22.262167 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 22:54:22.262223 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:54:22.267172 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 22:54:22.271237 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 12 22:54:22.273812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:54:22.275990 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 22:54:22.283772 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 22:54:22.286555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:54:22.289732 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 22:54:22.294123 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:54:22.296430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:54:22.302722 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 22:54:22.312885 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 22:54:22.317404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:54:22.319734 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 22:54:22.322151 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 22:54:22.332795 systemd-journald[1214]: Time spent on flushing to /var/log/journal/1ab2124f609b402caf5bce82a94d7365 is 51.366ms for 986 entries. Sep 12 22:54:22.332795 systemd-journald[1214]: System Journal (/var/log/journal/1ab2124f609b402caf5bce82a94d7365) is 8M, max 195.6M, 187.6M free. Sep 12 22:54:22.406437 systemd-journald[1214]: Received client request to flush runtime journal. Sep 12 22:54:22.406497 kernel: loop0: detected capacity change from 0 to 128016 Sep 12 22:54:22.350498 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 12 22:54:22.356082 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 22:54:22.367722 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 22:54:22.381685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:54:22.412391 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 22:54:22.420139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 22:54:22.450128 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 22:54:22.464573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:54:22.471577 kernel: loop1: detected capacity change from 0 to 110984 Sep 12 22:54:22.485640 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 22:54:22.504845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 22:54:22.513966 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 12 22:54:22.513992 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Sep 12 22:54:22.521184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:54:22.714578 kernel: loop2: detected capacity change from 0 to 229808 Sep 12 22:54:22.795877 kernel: loop3: detected capacity change from 0 to 128016 Sep 12 22:54:22.827605 kernel: loop4: detected capacity change from 0 to 110984 Sep 12 22:54:22.886594 kernel: loop5: detected capacity change from 0 to 229808 Sep 12 22:54:22.917991 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 22:54:22.918888 (sd-merge)[1285]: Merged extensions into '/usr'. Sep 12 22:54:22.928865 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 22:54:22.929115 systemd[1]: Reloading... 
Sep 12 22:54:23.516996 zram_generator::config[1307]: No configuration found. Sep 12 22:54:23.888320 systemd[1]: Reloading finished in 958 ms. Sep 12 22:54:23.921085 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 22:54:24.063827 systemd[1]: Starting ensure-sysext.service... Sep 12 22:54:24.110296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:54:24.139132 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Sep 12 22:54:24.139159 systemd[1]: Reloading... Sep 12 22:54:24.186350 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 22:54:24.187825 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 22:54:24.188565 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 22:54:24.189037 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 22:54:24.190412 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 22:54:24.191248 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Sep 12 22:54:24.191414 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Sep 12 22:54:24.202721 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:54:24.202741 systemd-tmpfiles[1348]: Skipping /boot Sep 12 22:54:24.301334 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:54:24.301717 systemd-tmpfiles[1348]: Skipping /boot Sep 12 22:54:24.310628 zram_generator::config[1375]: No configuration found. 
Sep 12 22:54:24.399661 ldconfig[1257]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 22:54:24.645924 systemd[1]: Reloading finished in 505 ms.
Sep 12 22:54:24.677128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 22:54:24.681219 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 22:54:24.720685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:54:24.738037 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:54:24.761441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 22:54:24.768862 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 22:54:24.786116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 22:54:24.796878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:54:24.806864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 22:54:24.825982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:24.826241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:54:24.828622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:54:24.885721 augenrules[1442]: No rules
Sep 12 22:54:24.934673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:54:24.948787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:54:24.950193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:54:24.950340 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:54:24.956634 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 22:54:24.957920 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:24.966009 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:54:24.966387 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:54:24.972671 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 22:54:24.978306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:54:24.978913 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:54:24.981413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:54:24.981944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:54:24.984738 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:54:24.985031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:54:24.985931 systemd-udevd[1426]: Using default interface naming scheme 'v255'.
Sep 12 22:54:25.004828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 22:54:25.011057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 22:54:25.020521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:25.024018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:54:25.028156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:54:25.036771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:54:25.050215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:54:25.054221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:54:25.054416 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:54:25.061757 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 22:54:25.065642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 22:54:25.065854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:25.072235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:54:25.083996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:54:25.084324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:54:25.094613 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 22:54:25.097678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:54:25.098302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:54:25.101696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:54:25.101999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:54:25.157891 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 22:54:25.229491 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 22:54:25.232576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:25.238048 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:54:25.239689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:54:25.248841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:54:25.254938 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:54:25.275484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:54:25.288912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:54:25.290464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:54:25.290519 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:54:25.294080 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 22:54:25.295912 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 22:54:25.295950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:54:25.297890 systemd[1]: Finished ensure-sysext.service.
Sep 12 22:54:25.308890 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:54:25.317938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:54:25.327824 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 22:54:25.339753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:54:25.340068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:54:25.343266 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 22:54:25.344316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:54:25.344648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:54:25.347409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:54:25.347810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:54:25.361714 augenrules[1500]: /sbin/augenrules: No change
Sep 12 22:54:25.361716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:54:25.361803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:54:25.372844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 22:54:25.388018 augenrules[1533]: No rules
Sep 12 22:54:25.391903 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:54:25.392273 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:54:25.414229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 12 22:54:25.416381 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 22:54:25.421045 kernel: ACPI: button: Power Button [PWRF]
Sep 12 22:54:25.476843 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 22:54:25.510307 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 22:54:25.510723 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 22:54:25.707990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:54:25.903266 kernel: kvm_amd: TSC scaling supported
Sep 12 22:54:25.903348 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 22:54:25.903403 kernel: kvm_amd: Nested Paging enabled
Sep 12 22:54:25.903446 kernel: kvm_amd: LBR virtualization supported
Sep 12 22:54:25.906826 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 22:54:25.909586 kernel: kvm_amd: Virtual GIF supported
Sep 12 22:54:25.991267 systemd-resolved[1422]: Positive Trust Anchors:
Sep 12 22:54:25.991291 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 22:54:25.991334 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 22:54:25.995553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:54:25.997486 systemd-networkd[1508]: lo: Link UP
Sep 12 22:54:25.997500 systemd-networkd[1508]: lo: Gained carrier
Sep 12 22:54:25.998703 systemd-resolved[1422]: Defaulting to hostname 'linux'.
Sep 12 22:54:26.000711 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 22:54:26.002083 systemd-networkd[1508]: Enumeration completed
Sep 12 22:54:26.002447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 22:54:26.002617 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:54:26.002631 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 22:54:26.004677 systemd-networkd[1508]: eth0: Link UP
Sep 12 22:54:26.004888 systemd[1]: Reached target network.target - Network.
Sep 12 22:54:26.005279 systemd-networkd[1508]: eth0: Gained carrier
Sep 12 22:54:26.006208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:54:26.006357 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:54:26.011554 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 22:54:26.015143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 22:54:26.028925 kernel: EDAC MC: Ver: 3.0.0
Sep 12 22:54:26.043332 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 22:54:26.051643 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 22:54:26.576281 systemd-resolved[1422]: Clock change detected. Flushing caches.
Sep 12 22:54:26.576298 systemd-timesyncd[1517]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 22:54:26.576369 systemd-timesyncd[1517]: Initial clock synchronization to Fri 2025-09-12 22:54:26.576173 UTC.
Sep 12 22:54:26.578182 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
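The DHCPv4 lease logged above (`10.0.0.51/16`, gateway `10.0.0.1` from server `10.0.0.1`) can be decomposed with Python's standard `ipaddress` module; a quick sketch of what that prefix implies, using only the values from the log line:

```python
import ipaddress

# Address and prefix from the systemd-networkd DHCPv4 log line.
iface = ipaddress.ip_interface("10.0.0.51/16")
network = iface.network            # the enclosing /16 network
netmask = network.netmask          # dotted-quad form of the prefix
gateway = ipaddress.ip_address("10.0.0.1")

# The gateway lies inside the same /16, so the default route is on-link.
assert gateway in network
print(network, netmask, network.num_addresses)
```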
Sep 12 22:54:26.582152 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 22:54:26.584354 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 22:54:26.592887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 22:54:26.594903 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 22:54:26.596761 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 22:54:26.598545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 22:54:26.598609 systemd[1]: Reached target paths.target - Path Units.
Sep 12 22:54:26.599916 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 22:54:26.601682 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 22:54:26.603357 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 22:54:26.605311 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 22:54:26.614359 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 22:54:26.620136 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 22:54:26.629611 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 22:54:26.631621 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 22:54:26.636630 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 22:54:26.650988 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 22:54:26.655937 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 22:54:26.670937 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 22:54:26.680307 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 22:54:26.683228 systemd[1]: Reached target basic.target - Basic System.
Sep 12 22:54:26.684546 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:54:26.684586 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:54:26.686376 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 22:54:26.689356 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 22:54:26.701260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 22:54:26.710038 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 22:54:26.715257 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 22:54:26.716609 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 22:54:26.719323 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 22:54:26.723250 jq[1576]: false
Sep 12 22:54:26.726268 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 22:54:26.731096 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 22:54:26.735411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 22:54:26.739219 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 22:54:26.746831 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 22:54:26.750707 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Sep 12 22:54:26.752691 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 22:54:26.753480 oslogin_cache_refresh[1578]: Refreshing passwd entry cache
Sep 12 22:54:26.753657 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 22:54:26.758426 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 22:54:26.763636 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 22:54:26.765889 oslogin_cache_refresh[1578]: Failure getting users, quitting
Sep 12 22:54:26.769770 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting users, quitting
Sep 12 22:54:26.769770 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 22:54:26.769770 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing group entry cache
Sep 12 22:54:26.768716 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 22:54:26.765919 oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 22:54:26.766002 oslogin_cache_refresh[1578]: Refreshing group entry cache
Sep 12 22:54:26.772768 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 22:54:26.779166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 22:54:26.781977 extend-filesystems[1577]: Found /dev/vda6
Sep 12 22:54:26.794052 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting groups, quitting
Sep 12 22:54:26.794052 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 22:54:26.793346 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 22:54:26.782049 oslogin_cache_refresh[1578]: Failure getting groups, quitting
Sep 12 22:54:26.794866 jq[1588]: true
Sep 12 22:54:26.798431 extend-filesystems[1577]: Found /dev/vda9
Sep 12 22:54:26.798431 extend-filesystems[1577]: Checking size of /dev/vda9
Sep 12 22:54:26.793866 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 22:54:26.782066 oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 22:54:26.797828 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 22:54:26.800125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 22:54:26.810002 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 22:54:26.816016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 22:54:26.816274 update_engine[1584]: I20250912 22:54:26.815351 1584 main.cc:92] Flatcar Update Engine starting
Sep 12 22:54:26.816673 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 22:54:26.826322 jq[1602]: true
Sep 12 22:54:26.831732 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 22:54:26.902830 dbus-daemon[1574]: [system] SELinux support is enabled
Sep 12 22:54:26.936018 update_engine[1584]: I20250912 22:54:26.930991 1584 update_check_scheduler.cc:74] Next update check in 6m22s
Sep 12 22:54:26.903163 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 22:54:26.912816 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 22:54:26.912854 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 22:54:26.916175 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 22:54:26.916197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 22:54:26.925318 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 22:54:26.937448 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 22:54:26.949043 tar[1594]: linux-amd64/LICENSE
Sep 12 22:54:26.949474 tar[1594]: linux-amd64/helm
Sep 12 22:54:26.951433 extend-filesystems[1577]: Resized partition /dev/vda9
Sep 12 22:54:26.976078 extend-filesystems[1634]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 22:54:26.993792 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 22:54:27.088693 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 22:54:27.106382 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 22:54:27.106436 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 22:54:27.110153 systemd-logind[1583]: New seat seat0.
Sep 12 22:54:27.112365 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 22:54:27.135350 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 22:54:27.205132 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 22:54:27.199631 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 22:54:27.202526 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 22:54:27.265596 extend-filesystems[1634]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 22:54:27.265596 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 22:54:27.265596 extend-filesystems[1634]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 22:54:27.289841 extend-filesystems[1577]: Resized filesystem in /dev/vda9
Sep 12 22:54:27.278025 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 22:54:27.294713 sshd_keygen[1600]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 22:54:27.278520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 22:54:27.335808 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 22:54:27.341792 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 22:54:27.357754 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:60754.service - OpenSSH per-connection server daemon (10.0.0.1:60754).
Sep 12 22:54:27.395878 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 22:54:27.407972 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 22:54:27.420959 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 22:54:27.524794 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 22:54:27.538492 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 22:54:27.550963 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 22:54:27.552667 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 22:54:27.746873 systemd-networkd[1508]: eth0: Gained IPv6LL
Sep 12 22:54:27.781788 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 22:54:27.787235 systemd[1]: Reached target network-online.target - Network is Online.
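The online resize logged above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks, and extend-filesystems reports the block size as 4 KiB ("1864699 (4k) blocks long"). Converting those counts to bytes is just arithmetic on the numbers in the log (nothing here beyond that):

```python
# Block counts from the EXT4-fs resize messages, 4 KiB block size as
# reported by extend-filesystems.
BLOCK_SIZE = 4096
old_blocks = 553_472
new_blocks = 1_864_699

old_bytes = old_blocks * BLOCK_SIZE   # size before the on-line resize
new_bytes = new_blocks * BLOCK_SIZE   # size after the on-line resize

print(f"before: {old_bytes / 2**30:.2f} GiB, after: {new_bytes / 2**30:.2f} GiB")
```

That is roughly a 2.1 GiB filesystem expanded to about 7.1 GiB, consistent with resize2fs growing the root partition to fill the disk on first boot.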
Sep 12 22:54:27.798733 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 22:54:27.807612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:54:27.817362 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 22:54:27.900866 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 60754 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:27.906844 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:27.921196 containerd[1605]: time="2025-09-12T22:54:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 22:54:27.922213 containerd[1605]: time="2025-09-12T22:54:27.922153210Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 22:54:27.938093 containerd[1605]: time="2025-09-12T22:54:27.938018779Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.785µs"
Sep 12 22:54:27.938093 containerd[1605]: time="2025-09-12T22:54:27.938071178Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 22:54:27.938093 containerd[1605]: time="2025-09-12T22:54:27.938094051Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 22:54:27.938423 containerd[1605]: time="2025-09-12T22:54:27.938375418Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 22:54:27.938469 containerd[1605]: time="2025-09-12T22:54:27.938431263Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 22:54:27.938521 containerd[1605]: time="2025-09-12T22:54:27.938489502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:54:27.938671 containerd[1605]: time="2025-09-12T22:54:27.938636729Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:54:27.938671 containerd[1605]: time="2025-09-12T22:54:27.938664441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:54:27.939141 containerd[1605]: time="2025-09-12T22:54:27.939101570Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:54:27.939141 containerd[1605]: time="2025-09-12T22:54:27.939131917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:54:27.939234 containerd[1605]: time="2025-09-12T22:54:27.939148058Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:54:27.939234 containerd[1605]: time="2025-09-12T22:54:27.939159659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 22:54:27.939382 containerd[1605]: time="2025-09-12T22:54:27.939346089Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 22:54:27.940215 containerd[1605]: time="2025-09-12T22:54:27.940163693Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:54:27.940263 containerd[1605]: time="2025-09-12T22:54:27.940226000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:54:27.940263 containerd[1605]: time="2025-09-12T22:54:27.940241749Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 22:54:27.940325 containerd[1605]: time="2025-09-12T22:54:27.940302453Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 22:54:27.943251 containerd[1605]: time="2025-09-12T22:54:27.941143631Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 22:54:27.943454 containerd[1605]: time="2025-09-12T22:54:27.943383472Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 22:54:27.946318 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 22:54:27.953723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 22:54:27.968735 systemd-logind[1583]: New session 1 of user core.
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975163272Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975286072Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975369950Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975404064Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975423230Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975436765Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975465649Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975492369Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975512767Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975527134Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975538396Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975554055Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975774769Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 22:54:27.978455 containerd[1605]: time="2025-09-12T22:54:27.975804455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975836234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975854919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975869697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975884044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975898892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975912096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975932675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975947663Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.975982819Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.976084129Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.976109356Z" level=info msg="Start snapshots syncer"
Sep 12 22:54:27.978938 containerd[1605]: time="2025-09-12T22:54:27.976146256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 22:54:27.979380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 22:54:27.979646 containerd[1605]: time="2025-09-12T22:54:27.978259670Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreIma
geDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 22:54:27.979646 containerd[1605]: time="2025-09-12T22:54:27.978328699Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 22:54:27.979874 containerd[1605]: time="2025-09-12T22:54:27.979603991Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 22:54:27.979904 containerd[1605]: time="2025-09-12T22:54:27.979875300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 22:54:27.979930 containerd[1605]: time="2025-09-12T22:54:27.979918131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 22:54:27.979955 containerd[1605]: time="2025-09-12T22:54:27.979935533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 22:54:27.979992 containerd[1605]: time="2025-09-12T22:54:27.979963897Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 22:54:27.980030 containerd[1605]: time="2025-09-12T22:54:27.980000746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 22:54:27.980030 containerd[1605]: time="2025-09-12T22:54:27.980017497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 22:54:27.980091 containerd[1605]: 
time="2025-09-12T22:54:27.980032135Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 22:54:27.980091 containerd[1605]: time="2025-09-12T22:54:27.980065026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 22:54:27.980143 containerd[1605]: time="2025-09-12T22:54:27.980099180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 22:54:27.980143 containerd[1605]: time="2025-09-12T22:54:27.980116613Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980201512Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980350462Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980370710Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980384455Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980566978Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980592476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980620398Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980648911Z" level=info msg="runtime interface created" Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980656886Z" level=info msg="created NRI interface" Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980673518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980696160Z" level=info msg="Connect containerd service" Sep 12 22:54:27.981329 containerd[1605]: time="2025-09-12T22:54:27.980969934Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 22:54:27.982329 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 22:54:27.983389 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 22:54:27.988204 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 22:54:27.992958 containerd[1605]: time="2025-09-12T22:54:27.992905531Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:54:28.003830 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 22:54:28.011816 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 22:54:28.103209 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 22:54:28.112684 systemd-logind[1583]: New session c1 of user core. 
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469342371Z" level=info msg="Start subscribing containerd event"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469500117Z" level=info msg="Start recovering state"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469708968Z" level=info msg="Start event monitor"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469728856Z" level=info msg="Start cni network conf syncer for default"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469737652Z" level=info msg="Start streaming server"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469756437Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469762459Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469830997Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469766917Z" level=info msg="runtime interface starting up..."
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469860513Z" level=info msg="starting plugins..."
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.469882424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 22:54:28.470442 containerd[1605]: time="2025-09-12T22:54:28.470341304Z" level=info msg="containerd successfully booted in 0.550186s"
Sep 12 22:54:28.474610 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 22:54:28.574717 systemd[1694]: Queued start job for default target default.target.
Sep 12 22:54:28.609524 systemd[1694]: Created slice app.slice - User Application Slice.
Sep 12 22:54:28.609565 systemd[1694]: Reached target paths.target - Paths.
Sep 12 22:54:28.609747 systemd[1694]: Reached target timers.target - Timers.
Sep 12 22:54:28.613918 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 22:54:28.665743 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 22:54:28.665952 systemd[1694]: Reached target sockets.target - Sockets.
Sep 12 22:54:28.666232 systemd[1694]: Reached target basic.target - Basic System.
Sep 12 22:54:28.666344 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 22:54:28.666564 systemd[1694]: Reached target default.target - Main User Target.
Sep 12 22:54:28.666618 systemd[1694]: Startup finished in 510ms.
Sep 12 22:54:28.678833 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 22:54:28.843646 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:60760.service - OpenSSH per-connection server daemon (10.0.0.1:60760).
Sep 12 22:54:28.889377 tar[1594]: linux-amd64/README.md
Sep 12 22:54:28.929650 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 22:54:28.988976 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 60760 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:28.992188 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:29.015445 systemd-logind[1583]: New session 2 of user core.
Sep 12 22:54:29.028298 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 22:54:29.110285 sshd[1724]: Connection closed by 10.0.0.1 port 60760
Sep 12 22:54:29.110980 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:29.135925 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:60760.service: Deactivated successfully.
Sep 12 22:54:29.140129 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 22:54:29.147099 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit.
Sep 12 22:54:29.158179 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:60774.service - OpenSSH per-connection server daemon (10.0.0.1:60774).
Sep 12 22:54:29.167759 systemd-logind[1583]: Removed session 2.
Sep 12 22:54:29.267565 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60774 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:29.270827 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:29.290498 systemd-logind[1583]: New session 3 of user core.
Sep 12 22:54:29.301725 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 22:54:29.393070 sshd[1733]: Connection closed by 10.0.0.1 port 60774
Sep 12 22:54:29.391481 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:29.403313 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:60774.service: Deactivated successfully.
Sep 12 22:54:29.412322 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 22:54:29.419752 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit.
Sep 12 22:54:29.421381 systemd-logind[1583]: Removed session 3.
Sep 12 22:54:31.742610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:54:31.758282 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 22:54:31.758314 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:54:31.782333 systemd[1]: Startup finished in 6.204s (kernel) + 14.327s (initrd) + 11.226s (userspace) = 31.758s.
Sep 12 22:54:34.180746 kubelet[1744]: E0912 22:54:34.180548 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:54:34.193831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:54:34.194184 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:54:34.195909 systemd[1]: kubelet.service: Consumed 4.468s CPU time, 269.3M memory peak.
Sep 12 22:54:39.437226 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:47720.service - OpenSSH per-connection server daemon (10.0.0.1:47720).
Sep 12 22:54:39.571785 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 47720 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:39.578907 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:39.588687 systemd-logind[1583]: New session 4 of user core.
Sep 12 22:54:39.597826 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 22:54:39.679202 sshd[1762]: Connection closed by 10.0.0.1 port 47720
Sep 12 22:54:39.679635 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:39.705098 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:47720.service: Deactivated successfully.
Sep 12 22:54:39.709426 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 22:54:39.716000 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit.
Sep 12 22:54:39.719131 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:47726.service - OpenSSH per-connection server daemon (10.0.0.1:47726).
Sep 12 22:54:39.720631 systemd-logind[1583]: Removed session 4.
Sep 12 22:54:39.820389 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:39.823115 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:39.833101 systemd-logind[1583]: New session 5 of user core.
Sep 12 22:54:39.844189 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 22:54:39.914921 sshd[1771]: Connection closed by 10.0.0.1 port 47726
Sep 12 22:54:39.917347 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:39.924594 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:34428.service - OpenSSH per-connection server daemon (10.0.0.1:34428).
Sep 12 22:54:39.942312 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:47726.service: Deactivated successfully.
Sep 12 22:54:39.949935 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 22:54:39.953521 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Sep 12 22:54:39.964698 systemd-logind[1583]: Removed session 5.
Sep 12 22:54:39.990495 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 34428 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:39.992635 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:40.001447 systemd-logind[1583]: New session 6 of user core.
Sep 12 22:54:40.015189 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 22:54:40.119786 sshd[1780]: Connection closed by 10.0.0.1 port 34428
Sep 12 22:54:40.122132 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:40.142913 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:34428.service: Deactivated successfully.
Sep 12 22:54:40.147851 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 22:54:40.148837 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Sep 12 22:54:40.152350 systemd-logind[1583]: Removed session 6.
Sep 12 22:54:40.154414 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:34442.service - OpenSSH per-connection server daemon (10.0.0.1:34442).
Sep 12 22:54:40.252179 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 34442 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:40.254498 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:40.277245 systemd-logind[1583]: New session 7 of user core.
Sep 12 22:54:40.294765 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 22:54:40.393664 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 22:54:40.396524 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:54:40.429127 sudo[1790]: pam_unix(sudo:session): session closed for user root
Sep 12 22:54:40.436923 sshd[1789]: Connection closed by 10.0.0.1 port 34442
Sep 12 22:54:40.437852 sshd-session[1786]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:40.453550 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:34442.service: Deactivated successfully.
Sep 12 22:54:40.462744 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 22:54:40.465180 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit.
Sep 12 22:54:40.470126 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:34448.service - OpenSSH per-connection server daemon (10.0.0.1:34448).
Sep 12 22:54:40.473292 systemd-logind[1583]: Removed session 7.
Sep 12 22:54:40.568197 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 34448 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:40.571141 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:40.581942 systemd-logind[1583]: New session 8 of user core.
Sep 12 22:54:40.597792 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 22:54:40.666363 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 22:54:40.670612 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:54:40.865814 sudo[1801]: pam_unix(sudo:session): session closed for user root
Sep 12 22:54:40.877921 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 22:54:40.879161 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:54:40.906036 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:54:41.009333 augenrules[1823]: No rules
Sep 12 22:54:41.014086 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:54:41.014472 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:54:41.016707 sudo[1800]: pam_unix(sudo:session): session closed for user root
Sep 12 22:54:41.019615 sshd[1799]: Connection closed by 10.0.0.1 port 34448
Sep 12 22:54:41.020374 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Sep 12 22:54:41.046073 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:34448.service: Deactivated successfully.
Sep 12 22:54:41.051386 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 22:54:41.056691 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit.
Sep 12 22:54:41.060913 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:34454.service - OpenSSH per-connection server daemon (10.0.0.1:34454).
Sep 12 22:54:41.062697 systemd-logind[1583]: Removed session 8.
Sep 12 22:54:41.172561 sshd[1832]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:54:41.191358 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:54:41.240827 systemd-logind[1583]: New session 9 of user core.
Sep 12 22:54:41.258783 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 22:54:41.341697 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 22:54:41.342971 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 22:54:43.633821 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 22:54:43.655136 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 22:54:44.215594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:54:44.222221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:54:45.168265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:54:45.196232 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:54:45.250764 dockerd[1856]: time="2025-09-12T22:54:45.249940757Z" level=info msg="Starting up"
Sep 12 22:54:45.257028 dockerd[1856]: time="2025-09-12T22:54:45.255525322Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 22:54:45.466299 kubelet[1870]: E0912 22:54:45.466085 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:54:45.477288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:54:45.477596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:54:45.478151 systemd[1]: kubelet.service: Consumed 755ms CPU time, 109.2M memory peak.
Sep 12 22:54:45.560242 dockerd[1856]: time="2025-09-12T22:54:45.560132929Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 22:54:46.154215 dockerd[1856]: time="2025-09-12T22:54:46.153778829Z" level=info msg="Loading containers: start."
Sep 12 22:54:46.204633 kernel: Initializing XFRM netlink socket
Sep 12 22:54:47.020235 systemd-networkd[1508]: docker0: Link UP
Sep 12 22:54:47.033548 dockerd[1856]: time="2025-09-12T22:54:47.032183535Z" level=info msg="Loading containers: done."
Sep 12 22:54:47.077104 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2698244094-merged.mount: Deactivated successfully.
Sep 12 22:54:47.098687 dockerd[1856]: time="2025-09-12T22:54:47.098049571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 22:54:47.098687 dockerd[1856]: time="2025-09-12T22:54:47.098187539Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 22:54:47.098687 dockerd[1856]: time="2025-09-12T22:54:47.098327151Z" level=info msg="Initializing buildkit"
Sep 12 22:54:47.208759 dockerd[1856]: time="2025-09-12T22:54:47.208323273Z" level=info msg="Completed buildkit initialization"
Sep 12 22:54:47.226900 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 22:54:47.232140 dockerd[1856]: time="2025-09-12T22:54:47.226196838Z" level=info msg="Daemon has completed initialization"
Sep 12 22:54:47.246940 dockerd[1856]: time="2025-09-12T22:54:47.234222662Z" level=info msg="API listen on /run/docker.sock"
Sep 12 22:54:49.594438 containerd[1605]: time="2025-09-12T22:54:49.590423071Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 22:54:50.776457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154955452.mount: Deactivated successfully.
Sep 12 22:54:55.728788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 22:54:55.750292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:54:56.305584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:54:56.328848 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:54:56.651076 kubelet[2159]: E0912 22:54:56.650769 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:54:56.657530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:54:56.657739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:54:56.658153 systemd[1]: kubelet.service: Consumed 500ms CPU time, 111.2M memory peak.
Sep 12 22:54:57.911254 containerd[1605]: time="2025-09-12T22:54:57.907210717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:54:57.938492 containerd[1605]: time="2025-09-12T22:54:57.938345197Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 12 22:54:57.974248 containerd[1605]: time="2025-09-12T22:54:57.971374300Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:54:57.984273 containerd[1605]: time="2025-09-12T22:54:57.983065820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:54:57.989453 containerd[1605]: time="2025-09-12T22:54:57.987969057Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 8.397459835s"
Sep 12 22:54:57.989453 containerd[1605]: time="2025-09-12T22:54:57.988023860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 12 22:54:57.989453 containerd[1605]: time="2025-09-12T22:54:57.988859437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 22:55:05.011277 containerd[1605]: time="2025-09-12T22:55:05.011147829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:05.012611 containerd[1605]: time="2025-09-12T22:55:05.012518600Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 12 22:55:05.017979 containerd[1605]: time="2025-09-12T22:55:05.014309493Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:05.022850 containerd[1605]: time="2025-09-12T22:55:05.022253792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:05.024932 containerd[1605]: time="2025-09-12T22:55:05.023435938Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 7.034514605s"
Sep 12 22:55:05.024932 containerd[1605]: time="2025-09-12T22:55:05.023494408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 12 22:55:05.025876 containerd[1605]: time="2025-09-12T22:55:05.025336016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 22:55:06.713784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 22:55:06.720574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 22:55:07.262778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 22:55:07.283943 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 22:55:07.444235 kubelet[2183]: E0912 22:55:07.444148 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 22:55:07.457811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 22:55:07.458146 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 22:55:07.461509 systemd[1]: kubelet.service: Consumed 420ms CPU time, 109.6M memory peak.
Sep 12 22:55:08.333860 containerd[1605]: time="2025-09-12T22:55:08.333656375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:08.336308 containerd[1605]: time="2025-09-12T22:55:08.336227453Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 22:55:08.343105 containerd[1605]: time="2025-09-12T22:55:08.342878479Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:08.352050 containerd[1605]: time="2025-09-12T22:55:08.348536175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:08.358997 containerd[1605]: time="2025-09-12T22:55:08.352335695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 3.325050602s" Sep 12 22:55:08.358997 containerd[1605]: time="2025-09-12T22:55:08.358102978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 22:55:08.363402 containerd[1605]: time="2025-09-12T22:55:08.363337739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 22:55:10.772004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054654993.mount: Deactivated successfully. 
Sep 12 22:55:12.026791 update_engine[1584]: I20250912 22:55:12.026618 1584 update_attempter.cc:509] Updating boot flags... Sep 12 22:55:14.921039 containerd[1605]: time="2025-09-12T22:55:14.920875417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 22:55:14.923274 containerd[1605]: time="2025-09-12T22:55:14.921841263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:14.924457 containerd[1605]: time="2025-09-12T22:55:14.923984291Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:14.927608 containerd[1605]: time="2025-09-12T22:55:14.927503897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:14.928509 containerd[1605]: time="2025-09-12T22:55:14.928470143Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 6.564815428s" Sep 12 22:55:14.928638 containerd[1605]: time="2025-09-12T22:55:14.928614956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 22:55:14.930879 containerd[1605]: time="2025-09-12T22:55:14.930704223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 22:55:15.638030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312372309.mount: 
Deactivated successfully. Sep 12 22:55:17.460384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 22:55:17.464112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:18.093092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:18.110001 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:55:18.392710 kubelet[2276]: E0912 22:55:18.391200 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:55:18.402630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:55:18.402885 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:55:18.403633 systemd[1]: kubelet.service: Consumed 656ms CPU time, 110.9M memory peak. 
Sep 12 22:55:19.664346 containerd[1605]: time="2025-09-12T22:55:19.662908075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:19.664346 containerd[1605]: time="2025-09-12T22:55:19.664222244Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 22:55:19.665895 containerd[1605]: time="2025-09-12T22:55:19.665781644Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:19.671859 containerd[1605]: time="2025-09-12T22:55:19.671771367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:19.673221 containerd[1605]: time="2025-09-12T22:55:19.673149216Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.742410628s" Sep 12 22:55:19.673221 containerd[1605]: time="2025-09-12T22:55:19.673201283Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 22:55:19.674323 containerd[1605]: time="2025-09-12T22:55:19.674254472Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 22:55:20.308938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105884287.mount: Deactivated successfully. 
Sep 12 22:55:20.323638 containerd[1605]: time="2025-09-12T22:55:20.323512158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:55:20.326177 containerd[1605]: time="2025-09-12T22:55:20.326073469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 22:55:20.327965 containerd[1605]: time="2025-09-12T22:55:20.327417394Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:55:20.332745 containerd[1605]: time="2025-09-12T22:55:20.332641788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:55:20.333838 containerd[1605]: time="2025-09-12T22:55:20.333773795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 659.477004ms" Sep 12 22:55:20.333838 containerd[1605]: time="2025-09-12T22:55:20.333820523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 22:55:20.337349 containerd[1605]: time="2025-09-12T22:55:20.337273328Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 22:55:21.512707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908300292.mount: 
Deactivated successfully. Sep 12 22:55:28.467892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 22:55:28.476328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:29.419355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:29.461744 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:55:29.755317 kubelet[2349]: E0912 22:55:29.754990 2349 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:55:29.767581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:55:29.767813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:55:29.768255 systemd[1]: kubelet.service: Consumed 573ms CPU time, 110.8M memory peak. 
Sep 12 22:55:30.834115 containerd[1605]: time="2025-09-12T22:55:30.833708820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:30.837835 containerd[1605]: time="2025-09-12T22:55:30.837724007Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 22:55:30.847544 containerd[1605]: time="2025-09-12T22:55:30.845278608Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:30.868187 containerd[1605]: time="2025-09-12T22:55:30.863700900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:30.890480 containerd[1605]: time="2025-09-12T22:55:30.889635274Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 10.552314226s" Sep 12 22:55:30.890480 containerd[1605]: time="2025-09-12T22:55:30.889694925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 22:55:36.289437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:36.289641 systemd[1]: kubelet.service: Consumed 573ms CPU time, 110.8M memory peak. Sep 12 22:55:36.297683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:36.379696 systemd[1]: Reload requested from client PID 2392 ('systemctl') (unit session-9.scope)... 
Sep 12 22:55:36.381170 systemd[1]: Reloading... Sep 12 22:55:36.541204 zram_generator::config[2435]: No configuration found. Sep 12 22:55:37.195546 systemd[1]: Reloading finished in 811 ms. Sep 12 22:55:37.386920 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 22:55:37.387480 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 22:55:37.391378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:37.391485 systemd[1]: kubelet.service: Consumed 236ms CPU time, 98.4M memory peak. Sep 12 22:55:37.395621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:37.890658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:37.920695 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:55:38.060674 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:55:38.060674 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:55:38.060674 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 22:55:38.061214 kubelet[2482]: I0912 22:55:38.060713 2482 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:55:39.351882 kubelet[2482]: I0912 22:55:39.350654 2482 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 22:55:39.351882 kubelet[2482]: I0912 22:55:39.351573 2482 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:55:39.353263 kubelet[2482]: I0912 22:55:39.352812 2482 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 22:55:39.433415 kubelet[2482]: E0912 22:55:39.431862 2482 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 22:55:39.434622 kubelet[2482]: I0912 22:55:39.434441 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:55:39.499038 kubelet[2482]: I0912 22:55:39.498290 2482 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:55:39.511812 kubelet[2482]: I0912 22:55:39.511739 2482 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 22:55:39.517654 kubelet[2482]: I0912 22:55:39.517565 2482 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:55:39.517893 kubelet[2482]: I0912 22:55:39.517625 2482 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:55:39.517893 kubelet[2482]: I0912 22:55:39.517892 2482 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:55:39.518162 
kubelet[2482]: I0912 22:55:39.517905 2482 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 22:55:39.518162 kubelet[2482]: I0912 22:55:39.518131 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:55:39.526472 kubelet[2482]: I0912 22:55:39.525573 2482 kubelet.go:480] "Attempting to sync node with API server" Sep 12 22:55:39.526472 kubelet[2482]: I0912 22:55:39.525661 2482 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:55:39.526472 kubelet[2482]: I0912 22:55:39.525702 2482 kubelet.go:386] "Adding apiserver pod source" Sep 12 22:55:39.526472 kubelet[2482]: I0912 22:55:39.525732 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:55:39.548117 kubelet[2482]: I0912 22:55:39.547251 2482 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:55:39.548557 kubelet[2482]: I0912 22:55:39.548513 2482 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 22:55:39.550762 kubelet[2482]: E0912 22:55:39.550684 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 22:55:39.551212 kubelet[2482]: E0912 22:55:39.551148 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 22:55:39.554250 kubelet[2482]: W0912 
22:55:39.553580 2482 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 22:55:39.566693 kubelet[2482]: I0912 22:55:39.563989 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:55:39.566693 kubelet[2482]: I0912 22:55:39.565699 2482 server.go:1289] "Started kubelet" Sep 12 22:55:39.582891 kubelet[2482]: I0912 22:55:39.580302 2482 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:55:39.582891 kubelet[2482]: I0912 22:55:39.580475 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:55:39.587280 kubelet[2482]: I0912 22:55:39.580328 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:55:39.588814 kubelet[2482]: I0912 22:55:39.588718 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:55:39.589934 kubelet[2482]: I0912 22:55:39.589907 2482 server.go:317] "Adding debug handlers to kubelet server" Sep 12 22:55:39.592437 kubelet[2482]: I0912 22:55:39.592367 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:55:39.601059 kubelet[2482]: E0912 22:55:39.592864 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:39.601059 kubelet[2482]: I0912 22:55:39.593480 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:55:39.601059 kubelet[2482]: I0912 22:55:39.593536 2482 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:55:39.601059 kubelet[2482]: E0912 22:55:39.596764 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 22:55:39.601059 kubelet[2482]: E0912 22:55:39.596859 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Sep 12 22:55:39.603451 kubelet[2482]: E0912 22:55:39.591926 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aaf8aa2961af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:55:39.564786095 +0000 UTC m=+1.582283447,LastTimestamp:2025-09-12 22:55:39.564786095 +0000 UTC m=+1.582283447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:55:39.603451 kubelet[2482]: I0912 22:55:39.602916 2482 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:55:39.609246 kubelet[2482]: E0912 22:55:39.609159 2482 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:55:39.612986 kubelet[2482]: I0912 22:55:39.612955 2482 factory.go:223] Registration of the containerd container factory successfully Sep 12 22:55:39.613158 kubelet[2482]: I0912 22:55:39.613107 2482 factory.go:223] Registration of the systemd container factory successfully Sep 12 22:55:39.613316 kubelet[2482]: I0912 22:55:39.613293 2482 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:55:39.659863 kubelet[2482]: I0912 22:55:39.659789 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 22:55:39.659863 kubelet[2482]: I0912 22:55:39.659823 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 22:55:39.659863 kubelet[2482]: I0912 22:55:39.659849 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:55:39.677305 kubelet[2482]: I0912 22:55:39.669280 2482 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 22:55:39.677305 kubelet[2482]: I0912 22:55:39.674033 2482 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 22:55:39.680355 kubelet[2482]: I0912 22:55:39.677708 2482 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 22:55:39.680355 kubelet[2482]: I0912 22:55:39.677784 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 22:55:39.680355 kubelet[2482]: I0912 22:55:39.677802 2482 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 22:55:39.680355 kubelet[2482]: E0912 22:55:39.678009 2482 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:55:39.680355 kubelet[2482]: E0912 22:55:39.679121 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 22:55:39.697107 kubelet[2482]: E0912 22:55:39.697017 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:39.779478 kubelet[2482]: E0912 22:55:39.779358 2482 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:55:39.798032 kubelet[2482]: E0912 22:55:39.797438 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:39.798374 kubelet[2482]: E0912 22:55:39.798155 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Sep 12 22:55:39.899443 kubelet[2482]: E0912 22:55:39.898988 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:39.980574 kubelet[2482]: E0912 22:55:39.980467 2482 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:55:40.002340 kubelet[2482]: E0912 22:55:39.999730 2482 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:40.100479 kubelet[2482]: E0912 22:55:40.100311 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:40.200656 kubelet[2482]: E0912 22:55:40.200450 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:40.211531 kubelet[2482]: E0912 22:55:40.210518 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Sep 12 22:55:40.301151 kubelet[2482]: E0912 22:55:40.300987 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:40.328761 kubelet[2482]: I0912 22:55:40.328129 2482 policy_none.go:49] "None policy: Start" Sep 12 22:55:40.328761 kubelet[2482]: I0912 22:55:40.328223 2482 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 22:55:40.328761 kubelet[2482]: I0912 22:55:40.328252 2482 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:55:40.380662 kubelet[2482]: E0912 22:55:40.380550 2482 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:55:40.396063 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 22:55:40.404252 kubelet[2482]: E0912 22:55:40.401376 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:55:40.436861 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 22:55:40.456229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 22:55:40.489295 kubelet[2482]: E0912 22:55:40.486745 2482 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 22:55:40.489295 kubelet[2482]: I0912 22:55:40.487071 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:55:40.489295 kubelet[2482]: I0912 22:55:40.487091 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:55:40.489295 kubelet[2482]: I0912 22:55:40.487382 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:55:40.491802 kubelet[2482]: E0912 22:55:40.491040 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 22:55:40.491802 kubelet[2482]: E0912 22:55:40.491097 2482 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 22:55:40.601941 kubelet[2482]: I0912 22:55:40.601854 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:55:40.602831 kubelet[2482]: E0912 22:55:40.602645 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 12 22:55:40.604088 kubelet[2482]: E0912 22:55:40.604040 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 22:55:40.673691 kubelet[2482]: E0912 
22:55:40.673606 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 22:55:40.819301 kubelet[2482]: I0912 22:55:40.819038 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:55:40.832148 kubelet[2482]: E0912 22:55:40.831432 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 12 22:55:40.915788 kubelet[2482]: E0912 22:55:40.915606 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 22:55:40.952549 kubelet[2482]: E0912 22:55:40.952453 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 22:55:41.012022 kubelet[2482]: E0912 22:55:41.011921 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Sep 12 22:55:41.218722 kubelet[2482]: I0912 22:55:41.218359 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:55:41.218722 kubelet[2482]: I0912 22:55:41.218437 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:55:41.218722 kubelet[2482]: I0912 22:55:41.218459 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:55:41.227417 systemd[1]: Created slice kubepods-burstable-podf307fa5edb5a28749645675fbda34af9.slice - libcontainer container kubepods-burstable-podf307fa5edb5a28749645675fbda34af9.slice. 
Sep 12 22:55:41.240284 kubelet[2482]: I0912 22:55:41.239155 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:55:41.240284 kubelet[2482]: E0912 22:55:41.240103 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 12 22:55:41.256572 kubelet[2482]: E0912 22:55:41.250972 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:41.271921 kubelet[2482]: E0912 22:55:41.271735 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aaf8aa2961af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:55:39.564786095 +0000 UTC m=+1.582283447,LastTimestamp:2025-09-12 22:55:39.564786095 +0000 UTC m=+1.582283447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:55:41.274898 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
Sep 12 22:55:41.281324 kubelet[2482]: E0912 22:55:41.280745 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:41.301455 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 22:55:41.304672 kubelet[2482]: E0912 22:55:41.304557 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:41.319611 kubelet[2482]: I0912 22:55:41.319514 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:41.319611 kubelet[2482]: I0912 22:55:41.319583 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:41.320021 kubelet[2482]: I0912 22:55:41.319632 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:41.320021 kubelet[2482]: I0912 22:55:41.319683 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:41.320021 kubelet[2482]: I0912 22:55:41.319703 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:41.320021 kubelet[2482]: I0912 22:55:41.319724 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:55:41.553594 kubelet[2482]: E0912 22:55:41.552781 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:41.554080 containerd[1605]: time="2025-09-12T22:55:41.553847438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f307fa5edb5a28749645675fbda34af9,Namespace:kube-system,Attempt:0,}" Sep 12 22:55:41.564802 kubelet[2482]: E0912 22:55:41.564694 2482 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 22:55:41.589698 kubelet[2482]: E0912 
22:55:41.582021 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:41.594638 containerd[1605]: time="2025-09-12T22:55:41.594554532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 22:55:41.608182 kubelet[2482]: E0912 22:55:41.608112 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:41.609199 containerd[1605]: time="2025-09-12T22:55:41.609018732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 22:55:41.671677 containerd[1605]: time="2025-09-12T22:55:41.671608117Z" level=info msg="connecting to shim b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064" address="unix:///run/containerd/s/bcde574250a749a457d18b70e93b19ca7d68b86d687fef43ca412f1ded4f72d2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:41.756031 containerd[1605]: time="2025-09-12T22:55:41.755337008Z" level=info msg="connecting to shim bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248" address="unix:///run/containerd/s/a9d71aa5658a0a7809e05c4edd1ba487cc420b77e05040c725d3ceb52fd627c9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:41.767210 containerd[1605]: time="2025-09-12T22:55:41.764758387Z" level=info msg="connecting to shim 78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642" address="unix:///run/containerd/s/64af8532349eb926559d79e191969c863529346402ec36fc784f72e25886ae57" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:41.813734 systemd[1]: Started 
cri-containerd-b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064.scope - libcontainer container b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064. Sep 12 22:55:41.852485 systemd[1]: Started cri-containerd-bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248.scope - libcontainer container bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248. Sep 12 22:55:41.901552 systemd[1]: Started cri-containerd-78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642.scope - libcontainer container 78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642. Sep 12 22:55:41.959196 containerd[1605]: time="2025-09-12T22:55:41.959118851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f307fa5edb5a28749645675fbda34af9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064\"" Sep 12 22:55:41.960950 kubelet[2482]: E0912 22:55:41.960879 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:41.984658 containerd[1605]: time="2025-09-12T22:55:41.983749409Z" level=info msg="CreateContainer within sandbox \"b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:55:42.019236 containerd[1605]: time="2025-09-12T22:55:42.018106235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248\"" Sep 12 22:55:42.019795 kubelet[2482]: E0912 22:55:42.019737 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 12 22:55:42.024758 containerd[1605]: time="2025-09-12T22:55:42.023117509Z" level=info msg="Container c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:42.040980 containerd[1605]: time="2025-09-12T22:55:42.040901418Z" level=info msg="CreateContainer within sandbox \"bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:55:42.043088 kubelet[2482]: I0912 22:55:42.042642 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:55:42.043088 kubelet[2482]: E0912 22:55:42.043051 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Sep 12 22:55:42.052961 containerd[1605]: time="2025-09-12T22:55:42.052894632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642\"" Sep 12 22:55:42.054224 kubelet[2482]: E0912 22:55:42.054173 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:42.054805 containerd[1605]: time="2025-09-12T22:55:42.054748430Z" level=info msg="CreateContainer within sandbox \"b76276359d5da2d5effc8b18c41adad5bb370c71047ed46399c7e78df73da064\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95\"" Sep 12 22:55:42.055746 containerd[1605]: time="2025-09-12T22:55:42.055535989Z" level=info msg="StartContainer for \"c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95\"" Sep 12 22:55:42.057838 
containerd[1605]: time="2025-09-12T22:55:42.057733521Z" level=info msg="connecting to shim c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95" address="unix:///run/containerd/s/bcde574250a749a457d18b70e93b19ca7d68b86d687fef43ca412f1ded4f72d2" protocol=ttrpc version=3 Sep 12 22:55:42.072449 containerd[1605]: time="2025-09-12T22:55:42.072290216Z" level=info msg="CreateContainer within sandbox \"78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:55:42.089520 containerd[1605]: time="2025-09-12T22:55:42.089445105Z" level=info msg="Container 813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:42.112862 containerd[1605]: time="2025-09-12T22:55:42.112718935Z" level=info msg="Container a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:42.123760 containerd[1605]: time="2025-09-12T22:55:42.123630981Z" level=info msg="CreateContainer within sandbox \"bc29f71491800ead3246f6ffe9811b5253e9c6161506e8616c618d852491e248\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c\"" Sep 12 22:55:42.124352 containerd[1605]: time="2025-09-12T22:55:42.124308292Z" level=info msg="StartContainer for \"813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c\"" Sep 12 22:55:42.125976 containerd[1605]: time="2025-09-12T22:55:42.125897183Z" level=info msg="connecting to shim 813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c" address="unix:///run/containerd/s/a9d71aa5658a0a7809e05c4edd1ba487cc420b77e05040c725d3ceb52fd627c9" protocol=ttrpc version=3 Sep 12 22:55:42.127672 systemd[1]: Started cri-containerd-c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95.scope - libcontainer container 
c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95. Sep 12 22:55:42.135638 containerd[1605]: time="2025-09-12T22:55:42.135460757Z" level=info msg="CreateContainer within sandbox \"78699024f27a92bb5b0279df4f341eb1be4ad86443681c6d6dda41b93a0de642\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48\"" Sep 12 22:55:42.142419 containerd[1605]: time="2025-09-12T22:55:42.142302646Z" level=info msg="StartContainer for \"a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48\"" Sep 12 22:55:42.148785 containerd[1605]: time="2025-09-12T22:55:42.148704758Z" level=info msg="connecting to shim a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48" address="unix:///run/containerd/s/64af8532349eb926559d79e191969c863529346402ec36fc784f72e25886ae57" protocol=ttrpc version=3 Sep 12 22:55:42.166695 systemd[1]: Started cri-containerd-813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c.scope - libcontainer container 813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c. Sep 12 22:55:42.202825 systemd[1]: Started cri-containerd-a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48.scope - libcontainer container a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48. 
Sep 12 22:55:42.305998 containerd[1605]: time="2025-09-12T22:55:42.305024185Z" level=info msg="StartContainer for \"c652e49871c7b1d2b266af83d51585ca9544f7f47bdaf766b7105c0f2c7fdc95\" returns successfully" Sep 12 22:55:42.396731 containerd[1605]: time="2025-09-12T22:55:42.396509516Z" level=info msg="StartContainer for \"a731e326618f0647e42ce54cfb3906bb514c6a5351d6f1791726b65b9b12cb48\" returns successfully" Sep 12 22:55:42.479647 containerd[1605]: time="2025-09-12T22:55:42.476344448Z" level=info msg="StartContainer for \"813ab999c27554a782ac6df12064d56884d8d7bc7908c2aaa77939306035ec4c\" returns successfully" Sep 12 22:55:42.705265 kubelet[2482]: E0912 22:55:42.705076 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:42.707224 kubelet[2482]: E0912 22:55:42.706945 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:42.709435 kubelet[2482]: E0912 22:55:42.709274 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:42.709535 kubelet[2482]: E0912 22:55:42.709516 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:42.714208 kubelet[2482]: E0912 22:55:42.714131 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:42.714577 kubelet[2482]: E0912 22:55:42.714518 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:43.645926 
kubelet[2482]: I0912 22:55:43.645119 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 22:55:43.717477 kubelet[2482]: E0912 22:55:43.717437 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:43.717951 kubelet[2482]: E0912 22:55:43.717570 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:43.718408 kubelet[2482]: E0912 22:55:43.718282 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:43.718489 kubelet[2482]: E0912 22:55:43.718386 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:44.720235 kubelet[2482]: E0912 22:55:44.720175 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:44.721713 kubelet[2482]: E0912 22:55:44.721018 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:45.162529 kubelet[2482]: E0912 22:55:45.161621 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 22:55:45.162529 kubelet[2482]: E0912 22:55:45.161790 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:45.952559 kubelet[2482]: E0912 22:55:45.952498 2482 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 22:55:46.021989 kubelet[2482]: I0912 22:55:46.021889 2482 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 22:55:46.021989 kubelet[2482]: E0912 22:55:46.021967 2482 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 22:55:46.095312 kubelet[2482]: I0912 22:55:46.094300 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:46.124118 kubelet[2482]: E0912 22:55:46.123261 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:55:46.124118 kubelet[2482]: I0912 22:55:46.123302 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 22:55:46.139999 kubelet[2482]: E0912 22:55:46.139806 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 22:55:46.139999 kubelet[2482]: I0912 22:55:46.139911 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 22:55:46.149156 kubelet[2482]: E0912 22:55:46.149083 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 22:55:46.553152 kubelet[2482]: I0912 22:55:46.551350 2482 apiserver.go:52] "Watching apiserver" Sep 12 22:55:46.597665 kubelet[2482]: I0912 22:55:46.597568 2482 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 22:55:49.509056 systemd[1]: Reload requested from client PID 2775 ('systemctl') (unit session-9.scope)... Sep 12 22:55:49.509086 systemd[1]: Reloading... Sep 12 22:55:49.722448 zram_generator::config[2818]: No configuration found. Sep 12 22:55:50.303710 systemd[1]: Reloading finished in 794 ms. Sep 12 22:55:50.393020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:50.442110 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:55:50.442587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:50.442666 systemd[1]: kubelet.service: Consumed 2.293s CPU time, 134.7M memory peak. Sep 12 22:55:50.452743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:55:50.945166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:55:50.967254 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:55:51.081418 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:55:51.081418 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:55:51.081418 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 22:55:51.081943 kubelet[2862]: I0912 22:55:51.081417 2862 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:55:51.097642 kubelet[2862]: I0912 22:55:51.097592 2862 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 22:55:51.101070 kubelet[2862]: I0912 22:55:51.101020 2862 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:55:51.101446 kubelet[2862]: I0912 22:55:51.101367 2862 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 22:55:51.102968 kubelet[2862]: I0912 22:55:51.102892 2862 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 22:55:51.105953 kubelet[2862]: I0912 22:55:51.105778 2862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:55:51.131240 kubelet[2862]: I0912 22:55:51.131190 2862 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:55:51.149345 kubelet[2862]: I0912 22:55:51.149282 2862 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 22:55:51.149667 kubelet[2862]: I0912 22:55:51.149607 2862 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:55:51.149875 kubelet[2862]: I0912 22:55:51.149659 2862 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:55:51.149875 kubelet[2862]: I0912 22:55:51.149872 2862 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:55:51.150067 
kubelet[2862]: I0912 22:55:51.149886 2862 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 22:55:51.150067 kubelet[2862]: I0912 22:55:51.150006 2862 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:55:51.153214 kubelet[2862]: I0912 22:55:51.153166 2862 kubelet.go:480] "Attempting to sync node with API server" Sep 12 22:55:51.153214 kubelet[2862]: I0912 22:55:51.153198 2862 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:55:51.153325 kubelet[2862]: I0912 22:55:51.153240 2862 kubelet.go:386] "Adding apiserver pod source" Sep 12 22:55:51.153325 kubelet[2862]: I0912 22:55:51.153272 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:55:51.167063 kubelet[2862]: I0912 22:55:51.166990 2862 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:55:51.172443 kubelet[2862]: I0912 22:55:51.171295 2862 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 22:55:51.190586 kubelet[2862]: I0912 22:55:51.185296 2862 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:55:51.190586 kubelet[2862]: I0912 22:55:51.185358 2862 server.go:1289] "Started kubelet" Sep 12 22:55:51.190586 kubelet[2862]: I0912 22:55:51.186155 2862 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:55:51.190586 kubelet[2862]: I0912 22:55:51.187459 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:55:51.190586 kubelet[2862]: I0912 22:55:51.187809 2862 server.go:317] "Adding debug handlers to kubelet server" Sep 12 22:55:51.191470 kubelet[2862]: I0912 22:55:51.191262 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:55:51.195965 
kubelet[2862]: I0912 22:55:51.194296 2862 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:55:51.195965 kubelet[2862]: I0912 22:55:51.195793 2862 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:55:51.203047 kubelet[2862]: I0912 22:55:51.202081 2862 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:55:51.218019 kubelet[2862]: I0912 22:55:51.214833 2862 factory.go:223] Registration of the systemd container factory successfully Sep 12 22:55:51.218019 kubelet[2862]: I0912 22:55:51.215151 2862 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:55:51.223463 kubelet[2862]: E0912 22:55:51.221041 2862 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:55:51.226981 kubelet[2862]: I0912 22:55:51.224251 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:55:51.226981 kubelet[2862]: I0912 22:55:51.225533 2862 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:55:51.234567 kubelet[2862]: I0912 22:55:51.227367 2862 factory.go:223] Registration of the containerd container factory successfully Sep 12 22:55:51.326460 kubelet[2862]: I0912 22:55:51.326212 2862 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 22:55:51.330018 kubelet[2862]: I0912 22:55:51.329659 2862 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6"
Sep 12 22:55:51.330018 kubelet[2862]: I0912 22:55:51.329686 2862 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 22:55:51.330018 kubelet[2862]: I0912 22:55:51.329712 2862 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 22:55:51.330018 kubelet[2862]: I0912 22:55:51.329722 2862 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 22:55:51.330018 kubelet[2862]: E0912 22:55:51.329779 2862 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 22:55:51.431918 kubelet[2862]: E0912 22:55:51.431830 2862 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 22:55:51.472219 kubelet[2862]: I0912 22:55:51.470179 2862 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 22:55:51.472219 kubelet[2862]: I0912 22:55:51.470788 2862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 22:55:51.475876 kubelet[2862]: I0912 22:55:51.473960 2862 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 22:55:51.476182 kubelet[2862]: I0912 22:55:51.476140 2862 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 22:55:51.476226 kubelet[2862]: I0912 22:55:51.476177 2862 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 22:55:51.476226 kubelet[2862]: I0912 22:55:51.476213 2862 policy_none.go:49] "None policy: Start"
Sep 12 22:55:51.476226 kubelet[2862]: I0912 22:55:51.476226 2862 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 22:55:51.476304 kubelet[2862]: I0912 22:55:51.476240 2862 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 22:55:51.476347 kubelet[2862]: I0912 22:55:51.476342 2862 state_mem.go:75] "Updated machine memory state"
Sep 12 22:55:51.491428 kubelet[2862]: E0912 22:55:51.488873 2862 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 22:55:51.491428 kubelet[2862]: I0912 22:55:51.489149 2862 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 22:55:51.491428 kubelet[2862]: I0912 22:55:51.489180 2862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 22:55:51.501122 kubelet[2862]: I0912 22:55:51.494074 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 22:55:51.501122 kubelet[2862]: E0912 22:55:51.499474 2862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 22:55:51.624973 kubelet[2862]: I0912 22:55:51.622437 2862 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 22:55:51.636089 kubelet[2862]: I0912 22:55:51.633773 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 22:55:51.636089 kubelet[2862]: I0912 22:55:51.634639 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.636089 kubelet[2862]: I0912 22:55:51.635718 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 22:55:51.682316 kubelet[2862]: I0912 22:55:51.679982 2862 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 22:55:51.682316 kubelet[2862]: I0912 22:55:51.680108 2862 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 22:55:51.709542 kubelet[2862]: I0912 22:55:51.709464 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 22:55:51.709542 kubelet[2862]: I0912 22:55:51.709527 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 22:55:51.709542 kubelet[2862]: I0912 22:55:51.709559 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.709542 kubelet[2862]: I0912 22:55:51.709590 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.710018 kubelet[2862]: I0912 22:55:51.709624 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.710018 kubelet[2862]: I0912 22:55:51.709647 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 22:55:51.710018 kubelet[2862]: I0912 22:55:51.709680 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f307fa5edb5a28749645675fbda34af9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f307fa5edb5a28749645675fbda34af9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 22:55:51.710018 kubelet[2862]: I0912 22:55:51.709710 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.710018 kubelet[2862]: I0912 22:55:51.709733 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 22:55:51.964136 kubelet[2862]: E0912 22:55:51.962484 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:51.964795 kubelet[2862]: E0912 22:55:51.964722 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:51.965873 kubelet[2862]: E0912 22:55:51.965432 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:52.162106 kubelet[2862]: I0912 22:55:52.160697 2862 apiserver.go:52] "Watching apiserver"
Sep 12 22:55:52.196565 kubelet[2862]: I0912 22:55:52.196487 2862 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 22:55:52.400429 kubelet[2862]: E0912 22:55:52.399543 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:52.401279 kubelet[2862]: E0912 22:55:52.401254 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:52.401657 kubelet[2862]: E0912 22:55:52.401637 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:52.491037 kubelet[2862]: I0912 22:55:52.490493 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.490470701 podStartE2EDuration="1.490470701s" podCreationTimestamp="2025-09-12 22:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:52.485712424 +0000 UTC m=+1.510020903" watchObservedRunningTime="2025-09-12 22:55:52.490470701 +0000 UTC m=+1.514779150"
Sep 12 22:55:52.491037 kubelet[2862]: I0912 22:55:52.490632 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4906282069999999 podStartE2EDuration="1.490628207s" podCreationTimestamp="2025-09-12 22:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:52.462685254 +0000 UTC m=+1.486993723" watchObservedRunningTime="2025-09-12 22:55:52.490628207 +0000 UTC m=+1.514936656"
Sep 12 22:55:52.518251 kubelet[2862]: I0912 22:55:52.518182 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.518161701 podStartE2EDuration="1.518161701s" podCreationTimestamp="2025-09-12 22:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:52.513829103 +0000 UTC m=+1.538137562" watchObservedRunningTime="2025-09-12 22:55:52.518161701 +0000 UTC m=+1.542470150"
Sep 12 22:55:53.405164 kubelet[2862]: E0912 22:55:53.402879 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:53.405164 kubelet[2862]: E0912 22:55:53.403362 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:55.164329 kubelet[2862]: I0912 22:55:55.163667 2862 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 22:55:55.169323 kubelet[2862]: I0912 22:55:55.168389 2862 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 22:55:55.169444 containerd[1605]: time="2025-09-12T22:55:55.167692001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 22:55:55.344704 kubelet[2862]: E0912 22:55:55.327859 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:55.409932 kubelet[2862]: E0912 22:55:55.409863 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:56.412464 kubelet[2862]: I0912 22:55:56.412228 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd602e32-c386-4704-bc5b-e4c918f8ffa1-xtables-lock\") pod \"kube-proxy-sf8rk\" (UID: \"fd602e32-c386-4704-bc5b-e4c918f8ffa1\") " pod="kube-system/kube-proxy-sf8rk"
Sep 12 22:55:56.412464 kubelet[2862]: I0912 22:55:56.412305 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd602e32-c386-4704-bc5b-e4c918f8ffa1-lib-modules\") pod \"kube-proxy-sf8rk\" (UID: \"fd602e32-c386-4704-bc5b-e4c918f8ffa1\") " pod="kube-system/kube-proxy-sf8rk"
Sep 12 22:55:56.412464 kubelet[2862]: I0912 22:55:56.412323 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd602e32-c386-4704-bc5b-e4c918f8ffa1-kube-proxy\") pod \"kube-proxy-sf8rk\" (UID: \"fd602e32-c386-4704-bc5b-e4c918f8ffa1\") " pod="kube-system/kube-proxy-sf8rk"
Sep 12 22:55:56.412464 kubelet[2862]: I0912 22:55:56.412358 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xdv\" (UniqueName: \"kubernetes.io/projected/fd602e32-c386-4704-bc5b-e4c918f8ffa1-kube-api-access-t4xdv\") pod \"kube-proxy-sf8rk\" (UID: \"fd602e32-c386-4704-bc5b-e4c918f8ffa1\") " pod="kube-system/kube-proxy-sf8rk"
Sep 12 22:55:56.424660 systemd[1]: Created slice kubepods-besteffort-podfd602e32_c386_4704_bc5b_e4c918f8ffa1.slice - libcontainer container kubepods-besteffort-podfd602e32_c386_4704_bc5b_e4c918f8ffa1.slice.
Sep 12 22:55:56.605438 systemd[1]: Created slice kubepods-besteffort-pod38e6ffe2_e67a_470d_bdff_14cacfb6ff4d.slice - libcontainer container kubepods-besteffort-pod38e6ffe2_e67a_470d_bdff_14cacfb6ff4d.slice.
Sep 12 22:55:56.719923 kubelet[2862]: I0912 22:55:56.719074 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38e6ffe2-e67a-470d-bdff-14cacfb6ff4d-var-lib-calico\") pod \"tigera-operator-755d956888-9nhvh\" (UID: \"38e6ffe2-e67a-470d-bdff-14cacfb6ff4d\") " pod="tigera-operator/tigera-operator-755d956888-9nhvh"
Sep 12 22:55:56.719923 kubelet[2862]: I0912 22:55:56.719155 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttf72\" (UniqueName: \"kubernetes.io/projected/38e6ffe2-e67a-470d-bdff-14cacfb6ff4d-kube-api-access-ttf72\") pod \"tigera-operator-755d956888-9nhvh\" (UID: \"38e6ffe2-e67a-470d-bdff-14cacfb6ff4d\") " pod="tigera-operator/tigera-operator-755d956888-9nhvh"
Sep 12 22:55:56.771868 kubelet[2862]: E0912 22:55:56.769224 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:56.772122 containerd[1605]: time="2025-09-12T22:55:56.770079852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sf8rk,Uid:fd602e32-c386-4704-bc5b-e4c918f8ffa1,Namespace:kube-system,Attempt:0,}"
Sep 12 22:55:56.922187 containerd[1605]: time="2025-09-12T22:55:56.922091242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9nhvh,Uid:38e6ffe2-e67a-470d-bdff-14cacfb6ff4d,Namespace:tigera-operator,Attempt:0,}"
Sep 12 22:55:57.136380 containerd[1605]: time="2025-09-12T22:55:57.136232516Z" level=info msg="connecting to shim 2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076" address="unix:///run/containerd/s/18eead8d2702027c76951c89853f698ef4394882ba4c26b02bbe96a942a8eaa1" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:55:57.169474 containerd[1605]: time="2025-09-12T22:55:57.169211974Z" level=info msg="connecting to shim 641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21" address="unix:///run/containerd/s/49d09e9b3a85da1baa105d6c88ada4cdc6ae1872ddacb5081adf5c185ffa1f8d" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:55:57.214688 systemd[1]: Started cri-containerd-2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076.scope - libcontainer container 2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076.
Sep 12 22:55:57.242792 systemd[1]: Started cri-containerd-641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21.scope - libcontainer container 641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21.
Sep 12 22:55:57.317763 containerd[1605]: time="2025-09-12T22:55:57.317496871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sf8rk,Uid:fd602e32-c386-4704-bc5b-e4c918f8ffa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076\""
Sep 12 22:55:57.321144 kubelet[2862]: E0912 22:55:57.321042 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:57.341494 containerd[1605]: time="2025-09-12T22:55:57.341375697Z" level=info msg="CreateContainer within sandbox \"2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 22:55:57.396661 containerd[1605]: time="2025-09-12T22:55:57.396162465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9nhvh,Uid:38e6ffe2-e67a-470d-bdff-14cacfb6ff4d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21\""
Sep 12 22:55:57.412721 containerd[1605]: time="2025-09-12T22:55:57.412412369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 22:55:57.415647 containerd[1605]: time="2025-09-12T22:55:57.415556478Z" level=info msg="Container 3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:55:57.448865 containerd[1605]: time="2025-09-12T22:55:57.448554762Z" level=info msg="CreateContainer within sandbox \"2941e0fd97e980a8650fc2e36422dd4a8a6acf5256d6edb499fffc95a67db076\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218\""
Sep 12 22:55:57.452853 containerd[1605]: time="2025-09-12T22:55:57.451276669Z" level=info msg="StartContainer for \"3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218\""
Sep 12 22:55:57.457422 containerd[1605]: time="2025-09-12T22:55:57.457316138Z" level=info msg="connecting to shim 3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218" address="unix:///run/containerd/s/18eead8d2702027c76951c89853f698ef4394882ba4c26b02bbe96a942a8eaa1" protocol=ttrpc version=3
Sep 12 22:55:57.543854 systemd[1]: Started cri-containerd-3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218.scope - libcontainer container 3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218.
Sep 12 22:55:57.662303 containerd[1605]: time="2025-09-12T22:55:57.658387772Z" level=info msg="StartContainer for \"3a2af1591961747f1e9ee17d8cb59d22acba85f0737673b300fb4772bd679218\" returns successfully"
Sep 12 22:55:58.241698 kubelet[2862]: E0912 22:55:58.240132 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:58.461781 kubelet[2862]: E0912 22:55:58.461565 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:58.461781 kubelet[2862]: E0912 22:55:58.461666 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:58.531282 kubelet[2862]: I0912 22:55:58.530979 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sf8rk" podStartSLOduration=2.530962947 podStartE2EDuration="2.530962947s" podCreationTimestamp="2025-09-12 22:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:58.53056994 +0000 UTC m=+7.554878389" watchObservedRunningTime="2025-09-12 22:55:58.530962947 +0000 UTC m=+7.555271396"
Sep 12 22:55:59.466928 kubelet[2862]: E0912 22:55:59.463179 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:59.469416 kubelet[2862]: E0912 22:55:59.469356 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:00.981156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952723513.mount: Deactivated successfully.
Sep 12 22:56:01.167513 kubelet[2862]: E0912 22:56:01.167441 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:01.477253 kubelet[2862]: E0912 22:56:01.477208 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:02.485234 kubelet[2862]: E0912 22:56:02.481283 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:02.943045 containerd[1605]: time="2025-09-12T22:56:02.941892397Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:02.943045 containerd[1605]: time="2025-09-12T22:56:02.941976405Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 22:56:02.949715 containerd[1605]: time="2025-09-12T22:56:02.947994805Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:02.961515 containerd[1605]: time="2025-09-12T22:56:02.959921398Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:56:02.961515 containerd[1605]: time="2025-09-12T22:56:02.960945149Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.548481262s"
Sep 12 22:56:02.961515 containerd[1605]: time="2025-09-12T22:56:02.960978171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 22:56:02.981843 containerd[1605]: time="2025-09-12T22:56:02.981761826Z" level=info msg="CreateContainer within sandbox \"641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 22:56:03.052794 containerd[1605]: time="2025-09-12T22:56:03.052010117Z" level=info msg="Container 31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:56:03.077653 containerd[1605]: time="2025-09-12T22:56:03.076820660Z" level=info msg="CreateContainer within sandbox \"641b532fa6641330871352a6335a22cc6e3534e322e437e5e8b576c21fa72a21\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03\""
Sep 12 22:56:03.086692 containerd[1605]: time="2025-09-12T22:56:03.082174475Z" level=info msg="StartContainer for \"31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03\""
Sep 12 22:56:03.086692 containerd[1605]: time="2025-09-12T22:56:03.083360433Z" level=info msg="connecting to shim 31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03" address="unix:///run/containerd/s/49d09e9b3a85da1baa105d6c88ada4cdc6ae1872ddacb5081adf5c185ffa1f8d" protocol=ttrpc version=3
Sep 12 22:56:03.229618 systemd[1]: Started cri-containerd-31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03.scope - libcontainer container 31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03.
Sep 12 22:56:03.323825 containerd[1605]: time="2025-09-12T22:56:03.319212814Z" level=info msg="StartContainer for \"31c799f34a3907f6514f3d5d8c268ef263147cf933943f59afded41447f39a03\" returns successfully"
Sep 12 22:56:03.537443 kubelet[2862]: I0912 22:56:03.534685 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-9nhvh" podStartSLOduration=1.978366854 podStartE2EDuration="7.534666473s" podCreationTimestamp="2025-09-12 22:55:56 +0000 UTC" firstStartedPulling="2025-09-12 22:55:57.406318057 +0000 UTC m=+6.430626506" lastFinishedPulling="2025-09-12 22:56:02.962617676 +0000 UTC m=+11.986926125" observedRunningTime="2025-09-12 22:56:03.533623105 +0000 UTC m=+12.557931564" watchObservedRunningTime="2025-09-12 22:56:03.534666473 +0000 UTC m=+12.558974932"
Sep 12 22:56:09.490674 sudo[1836]: pam_unix(sudo:session): session closed for user root
Sep 12 22:56:09.500512 sshd[1835]: Connection closed by 10.0.0.1 port 34454
Sep 12 22:56:09.501990 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:09.513872 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:34454.service: Deactivated successfully.
Sep 12 22:56:09.520098 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 22:56:09.522370 systemd[1]: session-9.scope: Consumed 10.237s CPU time, 234.7M memory peak.
Sep 12 22:56:09.532157 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit.
Sep 12 22:56:09.536436 systemd-logind[1583]: Removed session 9.
Sep 12 22:56:15.295494 systemd[1]: Created slice kubepods-besteffort-poda317fffb_51f7_4eb2_8def_add773996e89.slice - libcontainer container kubepods-besteffort-poda317fffb_51f7_4eb2_8def_add773996e89.slice.
Sep 12 22:56:15.331964 kubelet[2862]: I0912 22:56:15.331890 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a317fffb-51f7-4eb2-8def-add773996e89-typha-certs\") pod \"calico-typha-f5bc99dbc-tnp9p\" (UID: \"a317fffb-51f7-4eb2-8def-add773996e89\") " pod="calico-system/calico-typha-f5bc99dbc-tnp9p"
Sep 12 22:56:15.335047 kubelet[2862]: I0912 22:56:15.334584 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a317fffb-51f7-4eb2-8def-add773996e89-tigera-ca-bundle\") pod \"calico-typha-f5bc99dbc-tnp9p\" (UID: \"a317fffb-51f7-4eb2-8def-add773996e89\") " pod="calico-system/calico-typha-f5bc99dbc-tnp9p"
Sep 12 22:56:15.335047 kubelet[2862]: I0912 22:56:15.334652 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n5nf\" (UniqueName: \"kubernetes.io/projected/a317fffb-51f7-4eb2-8def-add773996e89-kube-api-access-5n5nf\") pod \"calico-typha-f5bc99dbc-tnp9p\" (UID: \"a317fffb-51f7-4eb2-8def-add773996e89\") " pod="calico-system/calico-typha-f5bc99dbc-tnp9p"
Sep 12 22:56:15.415256 systemd[1]: Created slice kubepods-besteffort-podcfd810ff_7f2e_4a0c_9944_a91a8839bdd1.slice - libcontainer container kubepods-besteffort-podcfd810ff_7f2e_4a0c_9944_a91a8839bdd1.slice.
Sep 12 22:56:15.435697 kubelet[2862]: I0912 22:56:15.435600 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-cni-net-dir\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435697 kubelet[2862]: I0912 22:56:15.435665 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-node-certs\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435697 kubelet[2862]: I0912 22:56:15.435687 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-cni-bin-dir\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435697 kubelet[2862]: I0912 22:56:15.435708 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-cni-log-dir\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435998 kubelet[2862]: I0912 22:56:15.435783 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-lib-modules\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435998 kubelet[2862]: I0912 22:56:15.435812 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-var-run-calico\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435998 kubelet[2862]: I0912 22:56:15.435851 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-flexvol-driver-host\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435998 kubelet[2862]: I0912 22:56:15.435877 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkqj\" (UniqueName: \"kubernetes.io/projected/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-kube-api-access-zhkqj\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.435998 kubelet[2862]: I0912 22:56:15.435898 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-policysync\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.436148 kubelet[2862]: I0912 22:56:15.435922 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-tigera-ca-bundle\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.436148 kubelet[2862]: I0912 22:56:15.435959 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-var-lib-calico\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.436148 kubelet[2862]: I0912 22:56:15.435978 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfd810ff-7f2e-4a0c-9944-a91a8839bdd1-xtables-lock\") pod \"calico-node-5fc4q\" (UID: \"cfd810ff-7f2e-4a0c-9944-a91a8839bdd1\") " pod="calico-system/calico-node-5fc4q"
Sep 12 22:56:15.545497 kubelet[2862]: E0912 22:56:15.545384 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:56:15.545497 kubelet[2862]: W0912 22:56:15.545445 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:56:15.546418 kubelet[2862]: E0912 22:56:15.545759 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:56:15.555910 kubelet[2862]: E0912 22:56:15.553987 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:56:15.555910 kubelet[2862]: W0912 22:56:15.554029 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:56:15.555910 kubelet[2862]: E0912 22:56:15.554060 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:56:15.581757 kubelet[2862]: E0912 22:56:15.581716 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:56:15.582140 kubelet[2862]: W0912 22:56:15.581810 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 22:56:15.582140 kubelet[2862]: E0912 22:56:15.581839 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 22:56:15.591345 kubelet[2862]: E0912 22:56:15.591261 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6"
Sep 12 22:56:15.602544 kubelet[2862]: E0912 22:56:15.601718 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:15.608015 containerd[1605]: time="2025-09-12T22:56:15.607907948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5bc99dbc-tnp9p,Uid:a317fffb-51f7-4eb2-8def-add773996e89,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:15.612297 kubelet[2862]: E0912 22:56:15.612121 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 22:56:15.612297 kubelet[2862]: W0912 22:56:15.612150 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not
found in $PATH, output: "" Sep 12 22:56:15.612297 kubelet[2862]: E0912 22:56:15.612178 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.612548 kubelet[2862]: E0912 22:56:15.612440 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.612548 kubelet[2862]: W0912 22:56:15.612481 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.612548 kubelet[2862]: E0912 22:56:15.612495 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.613896 kubelet[2862]: E0912 22:56:15.612842 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.613896 kubelet[2862]: W0912 22:56:15.612861 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.613896 kubelet[2862]: E0912 22:56:15.612876 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.614723 kubelet[2862]: E0912 22:56:15.614699 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.614826 kubelet[2862]: W0912 22:56:15.614807 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.614988 kubelet[2862]: E0912 22:56:15.614897 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.615997 kubelet[2862]: E0912 22:56:15.615978 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.616112 kubelet[2862]: W0912 22:56:15.616094 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.616260 kubelet[2862]: E0912 22:56:15.616194 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.620869 kubelet[2862]: E0912 22:56:15.620650 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.620869 kubelet[2862]: W0912 22:56:15.620681 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.620869 kubelet[2862]: E0912 22:56:15.620707 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.624320 kubelet[2862]: E0912 22:56:15.624056 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.624320 kubelet[2862]: W0912 22:56:15.624093 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.624320 kubelet[2862]: E0912 22:56:15.624118 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.625098 kubelet[2862]: E0912 22:56:15.625019 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.625098 kubelet[2862]: W0912 22:56:15.625059 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.625098 kubelet[2862]: E0912 22:56:15.625091 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.629620 kubelet[2862]: E0912 22:56:15.629544 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.629620 kubelet[2862]: W0912 22:56:15.629608 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.629620 kubelet[2862]: E0912 22:56:15.629637 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.647072 kubelet[2862]: E0912 22:56:15.646876 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.647072 kubelet[2862]: W0912 22:56:15.646912 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.647072 kubelet[2862]: E0912 22:56:15.646937 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.647340 kubelet[2862]: E0912 22:56:15.647222 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.647340 kubelet[2862]: W0912 22:56:15.647231 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.647340 kubelet[2862]: E0912 22:56:15.647241 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.647535 kubelet[2862]: E0912 22:56:15.647447 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.647535 kubelet[2862]: W0912 22:56:15.647464 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.647535 kubelet[2862]: E0912 22:56:15.647474 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.656517 kubelet[2862]: E0912 22:56:15.656449 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.656517 kubelet[2862]: W0912 22:56:15.656497 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.656517 kubelet[2862]: E0912 22:56:15.656527 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.659716 kubelet[2862]: E0912 22:56:15.659668 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.659716 kubelet[2862]: W0912 22:56:15.659697 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.659716 kubelet[2862]: E0912 22:56:15.659719 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.663959 kubelet[2862]: E0912 22:56:15.663916 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.663959 kubelet[2862]: W0912 22:56:15.663941 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.663959 kubelet[2862]: E0912 22:56:15.663957 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.668657 kubelet[2862]: E0912 22:56:15.668609 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.668657 kubelet[2862]: W0912 22:56:15.668635 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.668657 kubelet[2862]: E0912 22:56:15.668654 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.669244 kubelet[2862]: E0912 22:56:15.669212 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.669244 kubelet[2862]: W0912 22:56:15.669233 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.669244 kubelet[2862]: E0912 22:56:15.669246 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.675055 kubelet[2862]: E0912 22:56:15.674577 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.675055 kubelet[2862]: W0912 22:56:15.674604 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.675055 kubelet[2862]: E0912 22:56:15.674672 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.675484 kubelet[2862]: E0912 22:56:15.675450 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.675484 kubelet[2862]: W0912 22:56:15.675474 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.675566 kubelet[2862]: E0912 22:56:15.675487 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.679869 kubelet[2862]: E0912 22:56:15.675908 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.679869 kubelet[2862]: W0912 22:56:15.679053 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.679869 kubelet[2862]: E0912 22:56:15.679075 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.679869 kubelet[2862]: E0912 22:56:15.679657 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.679869 kubelet[2862]: W0912 22:56:15.679668 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.679869 kubelet[2862]: E0912 22:56:15.679755 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.680188 kubelet[2862]: I0912 22:56:15.679934 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e017806f-04da-4501-94aa-d21ceb92cfe6-registration-dir\") pod \"csi-node-driver-ktqqr\" (UID: \"e017806f-04da-4501-94aa-d21ceb92cfe6\") " pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:15.682693 kubelet[2862]: E0912 22:56:15.682225 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.682693 kubelet[2862]: W0912 22:56:15.682244 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.682693 kubelet[2862]: E0912 22:56:15.682256 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.682693 kubelet[2862]: I0912 22:56:15.682274 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e017806f-04da-4501-94aa-d21ceb92cfe6-varrun\") pod \"csi-node-driver-ktqqr\" (UID: \"e017806f-04da-4501-94aa-d21ceb92cfe6\") " pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:15.682693 kubelet[2862]: E0912 22:56:15.682488 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.682693 kubelet[2862]: W0912 22:56:15.682498 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.682693 kubelet[2862]: E0912 22:56:15.682507 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.682693 kubelet[2862]: I0912 22:56:15.682523 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkc6\" (UniqueName: \"kubernetes.io/projected/e017806f-04da-4501-94aa-d21ceb92cfe6-kube-api-access-6wkc6\") pod \"csi-node-driver-ktqqr\" (UID: \"e017806f-04da-4501-94aa-d21ceb92cfe6\") " pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:15.684538 kubelet[2862]: E0912 22:56:15.683788 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.684538 kubelet[2862]: W0912 22:56:15.684274 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.684538 kubelet[2862]: E0912 22:56:15.684289 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.684538 kubelet[2862]: I0912 22:56:15.684313 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e017806f-04da-4501-94aa-d21ceb92cfe6-socket-dir\") pod \"csi-node-driver-ktqqr\" (UID: \"e017806f-04da-4501-94aa-d21ceb92cfe6\") " pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:15.686605 kubelet[2862]: E0912 22:56:15.686013 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.686605 kubelet[2862]: W0912 22:56:15.686032 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.686605 kubelet[2862]: E0912 22:56:15.686045 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.686605 kubelet[2862]: I0912 22:56:15.686364 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e017806f-04da-4501-94aa-d21ceb92cfe6-kubelet-dir\") pod \"csi-node-driver-ktqqr\" (UID: \"e017806f-04da-4501-94aa-d21ceb92cfe6\") " pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:15.687369 kubelet[2862]: E0912 22:56:15.687025 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.687369 kubelet[2862]: W0912 22:56:15.687042 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.687369 kubelet[2862]: E0912 22:56:15.687053 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.689459 kubelet[2862]: E0912 22:56:15.689080 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.689459 kubelet[2862]: W0912 22:56:15.689278 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.689813 kubelet[2862]: E0912 22:56:15.689673 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.690951 kubelet[2862]: E0912 22:56:15.690760 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.690951 kubelet[2862]: W0912 22:56:15.690774 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.690951 kubelet[2862]: E0912 22:56:15.690788 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.691997 kubelet[2862]: E0912 22:56:15.691740 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.691997 kubelet[2862]: W0912 22:56:15.691752 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.691997 kubelet[2862]: E0912 22:56:15.691764 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.699675 kubelet[2862]: E0912 22:56:15.694522 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.699675 kubelet[2862]: W0912 22:56:15.694537 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.699675 kubelet[2862]: E0912 22:56:15.694551 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.700367 kubelet[2862]: E0912 22:56:15.700280 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.700367 kubelet[2862]: W0912 22:56:15.700322 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.700367 kubelet[2862]: E0912 22:56:15.700352 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.708623 kubelet[2862]: E0912 22:56:15.708551 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.708623 kubelet[2862]: W0912 22:56:15.708593 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.708623 kubelet[2862]: E0912 22:56:15.708622 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.709081 kubelet[2862]: E0912 22:56:15.709050 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.709081 kubelet[2862]: W0912 22:56:15.709065 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.709081 kubelet[2862]: E0912 22:56:15.709075 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.709863 kubelet[2862]: E0912 22:56:15.709547 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.709863 kubelet[2862]: W0912 22:56:15.709579 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.709863 kubelet[2862]: E0912 22:56:15.709610 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.711035 kubelet[2862]: E0912 22:56:15.710193 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.711035 kubelet[2862]: W0912 22:56:15.710616 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.711434 kubelet[2862]: E0912 22:56:15.711302 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.739670 containerd[1605]: time="2025-09-12T22:56:15.737642092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fc4q,Uid:cfd810ff-7f2e-4a0c-9944-a91a8839bdd1,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:15.784528 containerd[1605]: time="2025-09-12T22:56:15.784426692Z" level=info msg="connecting to shim 533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44" address="unix:///run/containerd/s/439d12a06693fe91418f5ebc6a8e245b8db388bd2acf7090705a2d48eadc1a9d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:15.788734 kubelet[2862]: E0912 22:56:15.788653 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.789288 kubelet[2862]: W0912 22:56:15.789130 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.789288 kubelet[2862]: E0912 22:56:15.789174 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.789921 kubelet[2862]: E0912 22:56:15.789903 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.790012 kubelet[2862]: W0912 22:56:15.789997 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.790093 kubelet[2862]: E0912 22:56:15.790075 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.791158 kubelet[2862]: E0912 22:56:15.791111 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.791158 kubelet[2862]: W0912 22:56:15.791126 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.791158 kubelet[2862]: E0912 22:56:15.791142 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.791654 kubelet[2862]: E0912 22:56:15.791610 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.791654 kubelet[2862]: W0912 22:56:15.791626 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.791654 kubelet[2862]: E0912 22:56:15.791637 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.792288 kubelet[2862]: E0912 22:56:15.792242 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.792288 kubelet[2862]: W0912 22:56:15.792257 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.792288 kubelet[2862]: E0912 22:56:15.792269 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.793115 kubelet[2862]: E0912 22:56:15.793070 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.793115 kubelet[2862]: W0912 22:56:15.793085 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.793115 kubelet[2862]: E0912 22:56:15.793098 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.795740 kubelet[2862]: E0912 22:56:15.795613 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.795740 kubelet[2862]: W0912 22:56:15.795633 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.795740 kubelet[2862]: E0912 22:56:15.795649 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.796544 kubelet[2862]: E0912 22:56:15.796336 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.796544 kubelet[2862]: W0912 22:56:15.796353 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.796544 kubelet[2862]: E0912 22:56:15.796369 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.800140 kubelet[2862]: E0912 22:56:15.800095 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.801253 kubelet[2862]: W0912 22:56:15.800307 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.801253 kubelet[2862]: E0912 22:56:15.800345 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.802179 kubelet[2862]: E0912 22:56:15.802146 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.802179 kubelet[2862]: W0912 22:56:15.802176 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.802283 kubelet[2862]: E0912 22:56:15.802192 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.809505 kubelet[2862]: E0912 22:56:15.809196 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.809505 kubelet[2862]: W0912 22:56:15.809244 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.809505 kubelet[2862]: E0912 22:56:15.809278 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.812054 kubelet[2862]: E0912 22:56:15.812008 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.812054 kubelet[2862]: W0912 22:56:15.812037 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.812054 kubelet[2862]: E0912 22:56:15.812056 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.812463 kubelet[2862]: E0912 22:56:15.812424 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.812463 kubelet[2862]: W0912 22:56:15.812441 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.814660 kubelet[2862]: E0912 22:56:15.814528 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.817553 containerd[1605]: time="2025-09-12T22:56:15.815162317Z" level=info msg="connecting to shim c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4" address="unix:///run/containerd/s/695a8de3dfd705c5da7eab8c374331237b9eb51bc5efae78501292264c1e392b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:15.817640 kubelet[2862]: E0912 22:56:15.816618 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.817640 kubelet[2862]: W0912 22:56:15.816636 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.817640 kubelet[2862]: E0912 22:56:15.816653 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.817640 kubelet[2862]: E0912 22:56:15.817324 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.817640 kubelet[2862]: W0912 22:56:15.817334 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.817640 kubelet[2862]: E0912 22:56:15.817346 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.819034 kubelet[2862]: E0912 22:56:15.818942 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.819176 kubelet[2862]: W0912 22:56:15.819143 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.819269 kubelet[2862]: E0912 22:56:15.819198 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.819633 kubelet[2862]: E0912 22:56:15.819603 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.819633 kubelet[2862]: W0912 22:56:15.819622 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.819633 kubelet[2862]: E0912 22:56:15.819634 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.820166 kubelet[2862]: E0912 22:56:15.820092 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.820166 kubelet[2862]: W0912 22:56:15.820112 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.820166 kubelet[2862]: E0912 22:56:15.820124 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.820505 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.821977 kubelet[2862]: W0912 22:56:15.820524 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.820536 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.820801 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.821977 kubelet[2862]: W0912 22:56:15.820811 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.820825 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.821088 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.821977 kubelet[2862]: W0912 22:56:15.821100 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.821111 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.821977 kubelet[2862]: E0912 22:56:15.821367 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.822295 kubelet[2862]: W0912 22:56:15.821378 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.822295 kubelet[2862]: E0912 22:56:15.821388 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.822295 kubelet[2862]: E0912 22:56:15.821708 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.822295 kubelet[2862]: W0912 22:56:15.821719 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.822295 kubelet[2862]: E0912 22:56:15.821729 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.822295 kubelet[2862]: E0912 22:56:15.822076 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.822295 kubelet[2862]: W0912 22:56:15.822089 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.822295 kubelet[2862]: E0912 22:56:15.822101 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:15.825989 kubelet[2862]: E0912 22:56:15.822854 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.825989 kubelet[2862]: W0912 22:56:15.822887 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.825989 kubelet[2862]: E0912 22:56:15.822936 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.842922 systemd[1]: Started cri-containerd-533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44.scope - libcontainer container 533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44. Sep 12 22:56:15.848516 kubelet[2862]: E0912 22:56:15.848477 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:15.848516 kubelet[2862]: W0912 22:56:15.848508 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:15.848700 kubelet[2862]: E0912 22:56:15.848531 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:15.894828 systemd[1]: Started cri-containerd-c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4.scope - libcontainer container c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4. 
Sep 12 22:56:16.002679 containerd[1605]: time="2025-09-12T22:56:16.002598781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5bc99dbc-tnp9p,Uid:a317fffb-51f7-4eb2-8def-add773996e89,Namespace:calico-system,Attempt:0,} returns sandbox id \"533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44\"" Sep 12 22:56:16.009162 kubelet[2862]: E0912 22:56:16.009102 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:16.014630 containerd[1605]: time="2025-09-12T22:56:16.014271302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fc4q,Uid:cfd810ff-7f2e-4a0c-9944-a91a8839bdd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\"" Sep 12 22:56:16.019128 containerd[1605]: time="2025-09-12T22:56:16.019072132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 22:56:17.330723 kubelet[2862]: E0912 22:56:17.330606 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:17.927100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690358081.mount: Deactivated successfully. 
Sep 12 22:56:19.341062 kubelet[2862]: E0912 22:56:19.336778 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:21.186016 containerd[1605]: time="2025-09-12T22:56:21.185925149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.187453 containerd[1605]: time="2025-09-12T22:56:21.187343376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 22:56:21.191059 containerd[1605]: time="2025-09-12T22:56:21.190974049Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.198471 containerd[1605]: time="2025-09-12T22:56:21.196353382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:21.200031 containerd[1605]: time="2025-09-12T22:56:21.199977663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.180849195s"
Sep 12 22:56:21.200267 containerd[1605]: time="2025-09-12T22:56:21.200143686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 22:56:21.205025 containerd[1605]: time="2025-09-12T22:56:21.204740172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 22:56:21.235665 containerd[1605]: time="2025-09-12T22:56:21.235016037Z" level=info msg="CreateContainer within sandbox \"533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 22:56:21.264502 containerd[1605]: time="2025-09-12T22:56:21.260936462Z" level=info msg="Container e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:21.282675 containerd[1605]: time="2025-09-12T22:56:21.282613637Z" level=info msg="CreateContainer within sandbox \"533fae0b24a4ae0ade92de7c7d5441c24e869bf392d639ad0d0ce12375eb4b44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c\"" Sep 12 22:56:21.283334 containerd[1605]: time="2025-09-12T22:56:21.283278302Z" level=info msg="StartContainer for \"e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c\"" Sep 12 22:56:21.285433 containerd[1605]: time="2025-09-12T22:56:21.285337779Z" level=info msg="connecting to shim e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c" address="unix:///run/containerd/s/439d12a06693fe91418f5ebc6a8e245b8db388bd2acf7090705a2d48eadc1a9d" protocol=ttrpc version=3 Sep 12 22:56:21.332219 kubelet[2862]: E0912 22:56:21.331692 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:21.344938 systemd[1]: Started cri-containerd-e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c.scope - libcontainer container e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c.
Sep 12 22:56:21.453438 containerd[1605]: time="2025-09-12T22:56:21.453235555Z" level=info msg="StartContainer for \"e958c584cf4588596f46c763b6d74739046e69c4d4bd4021af67a8d12492bc4c\" returns successfully" Sep 12 22:56:21.645782 kubelet[2862]: E0912 22:56:21.645710 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.739588 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.741475 kubelet[2862]: W0912 22:56:21.739622 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.739651 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.740146 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.741475 kubelet[2862]: W0912 22:56:21.740159 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.740171 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.740836 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.741475 kubelet[2862]: W0912 22:56:21.740848 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.741475 kubelet[2862]: E0912 22:56:21.740860 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.741918 kubelet[2862]: E0912 22:56:21.741765 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.741918 kubelet[2862]: W0912 22:56:21.741779 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.741918 kubelet[2862]: E0912 22:56:21.741794 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.743542 kubelet[2862]: E0912 22:56:21.743511 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.743542 kubelet[2862]: W0912 22:56:21.743533 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.743634 kubelet[2862]: E0912 22:56:21.743547 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.744187 kubelet[2862]: E0912 22:56:21.744151 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.744187 kubelet[2862]: W0912 22:56:21.744176 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.744275 kubelet[2862]: E0912 22:56:21.744192 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.748255 kubelet[2862]: E0912 22:56:21.747854 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.748255 kubelet[2862]: W0912 22:56:21.747883 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.748255 kubelet[2862]: E0912 22:56:21.747907 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.751067 kubelet[2862]: E0912 22:56:21.751020 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.751189 kubelet[2862]: W0912 22:56:21.751056 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.751189 kubelet[2862]: E0912 22:56:21.751106 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.751495 kubelet[2862]: E0912 22:56:21.751469 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.751495 kubelet[2862]: W0912 22:56:21.751489 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.751585 kubelet[2862]: E0912 22:56:21.751502 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.751717 kubelet[2862]: E0912 22:56:21.751692 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.751717 kubelet[2862]: W0912 22:56:21.751711 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.751791 kubelet[2862]: E0912 22:56:21.751723 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.752109 kubelet[2862]: E0912 22:56:21.751925 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.752109 kubelet[2862]: W0912 22:56:21.751939 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.752109 kubelet[2862]: E0912 22:56:21.751951 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.752208 kubelet[2862]: E0912 22:56:21.752179 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.752208 kubelet[2862]: W0912 22:56:21.752191 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.752208 kubelet[2862]: E0912 22:56:21.752203 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.752781 kubelet[2862]: E0912 22:56:21.752722 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.752781 kubelet[2862]: W0912 22:56:21.752743 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.752781 kubelet[2862]: E0912 22:56:21.752767 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.753154 kubelet[2862]: E0912 22:56:21.752967 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.753154 kubelet[2862]: W0912 22:56:21.752982 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.753154 kubelet[2862]: E0912 22:56:21.752993 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.753270 kubelet[2862]: E0912 22:56:21.753192 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.753270 kubelet[2862]: W0912 22:56:21.753203 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.753270 kubelet[2862]: E0912 22:56:21.753213 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.754115 kubelet[2862]: E0912 22:56:21.753529 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.754115 kubelet[2862]: W0912 22:56:21.753545 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.754115 kubelet[2862]: E0912 22:56:21.753557 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.754115 kubelet[2862]: E0912 22:56:21.753841 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.754115 kubelet[2862]: W0912 22:56:21.753851 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.754115 kubelet[2862]: E0912 22:56:21.753881 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.754313 kubelet[2862]: E0912 22:56:21.754269 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.754313 kubelet[2862]: W0912 22:56:21.754281 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.754313 kubelet[2862]: E0912 22:56:21.754291 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.754791 kubelet[2862]: E0912 22:56:21.754613 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.754791 kubelet[2862]: W0912 22:56:21.754625 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.754791 kubelet[2862]: E0912 22:56:21.754637 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.756741 kubelet[2862]: E0912 22:56:21.756528 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.756741 kubelet[2862]: W0912 22:56:21.756556 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.756741 kubelet[2862]: E0912 22:56:21.756575 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.757196 kubelet[2862]: E0912 22:56:21.757151 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.757196 kubelet[2862]: W0912 22:56:21.757170 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.757196 kubelet[2862]: E0912 22:56:21.757184 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.759749 kubelet[2862]: E0912 22:56:21.759706 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.759749 kubelet[2862]: W0912 22:56:21.759740 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.759843 kubelet[2862]: E0912 22:56:21.759762 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.761533 kubelet[2862]: E0912 22:56:21.761386 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.761533 kubelet[2862]: W0912 22:56:21.761521 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.761639 kubelet[2862]: E0912 22:56:21.761539 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.761849 kubelet[2862]: E0912 22:56:21.761817 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.761849 kubelet[2862]: W0912 22:56:21.761839 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.761923 kubelet[2862]: E0912 22:56:21.761851 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.762809 kubelet[2862]: E0912 22:56:21.762387 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.762809 kubelet[2862]: W0912 22:56:21.762421 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.762809 kubelet[2862]: E0912 22:56:21.762433 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.762809 kubelet[2862]: E0912 22:56:21.762688 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.762809 kubelet[2862]: W0912 22:56:21.762701 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.762809 kubelet[2862]: E0912 22:56:21.762712 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.763024 kubelet[2862]: E0912 22:56:21.763001 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.763024 kubelet[2862]: W0912 22:56:21.763014 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.763075 kubelet[2862]: E0912 22:56:21.763025 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.763521 kubelet[2862]: E0912 22:56:21.763469 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.763521 kubelet[2862]: W0912 22:56:21.763490 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.763521 kubelet[2862]: E0912 22:56:21.763502 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.764546 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.765595 kubelet[2862]: W0912 22:56:21.764559 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.764573 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.764876 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.765595 kubelet[2862]: W0912 22:56:21.764888 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.764900 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.765177 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.765595 kubelet[2862]: W0912 22:56:21.765188 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.765200 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:21.765595 kubelet[2862]: E0912 22:56:21.765472 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.765984 kubelet[2862]: W0912 22:56:21.765483 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.765984 kubelet[2862]: E0912 22:56:21.765494 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:21.766434 kubelet[2862]: E0912 22:56:21.766403 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:21.766434 kubelet[2862]: W0912 22:56:21.766424 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:21.766517 kubelet[2862]: E0912 22:56:21.766438 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.646516 kubelet[2862]: E0912 22:56:22.646464 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:22.675419 kubelet[2862]: E0912 22:56:22.671625 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.675419 kubelet[2862]: W0912 22:56:22.671661 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.675419 kubelet[2862]: E0912 22:56:22.671719 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.675419 kubelet[2862]: E0912 22:56:22.672052 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.675419 kubelet[2862]: W0912 22:56:22.672061 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.675419 kubelet[2862]: E0912 22:56:22.672072 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.675419 kubelet[2862]: E0912 22:56:22.672335 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.675831 kubelet[2862]: W0912 22:56:22.672344 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.675831 kubelet[2862]: E0912 22:56:22.675582 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.675934 kubelet[2862]: E0912 22:56:22.675887 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.675934 kubelet[2862]: W0912 22:56:22.675929 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.676028 kubelet[2862]: E0912 22:56:22.675943 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680115 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.682674 kubelet[2862]: W0912 22:56:22.680141 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680162 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680406 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.682674 kubelet[2862]: W0912 22:56:22.680419 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680431 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680615 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.682674 kubelet[2862]: W0912 22:56:22.680625 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680636 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.682674 kubelet[2862]: E0912 22:56:22.680822 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.683818 kubelet[2862]: W0912 22:56:22.680832 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.683818 kubelet[2862]: E0912 22:56:22.680844 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.683818 kubelet[2862]: E0912 22:56:22.681071 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.683818 kubelet[2862]: W0912 22:56:22.681082 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.683818 kubelet[2862]: E0912 22:56:22.681093 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.686439 kubelet[2862]: I0912 22:56:22.684747 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f5bc99dbc-tnp9p" podStartSLOduration=2.493189054 podStartE2EDuration="7.684726656s" podCreationTimestamp="2025-09-12 22:56:15 +0000 UTC" firstStartedPulling="2025-09-12 22:56:16.011297864 +0000 UTC m=+25.035606314" lastFinishedPulling="2025-09-12 22:56:21.202835467 +0000 UTC m=+30.227143916" observedRunningTime="2025-09-12 22:56:21.691097426 +0000 UTC m=+30.715405875" watchObservedRunningTime="2025-09-12 22:56:22.684726656 +0000 UTC m=+31.709035105" Sep 12 22:56:22.687343 kubelet[2862]: E0912 22:56:22.687292 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.687343 kubelet[2862]: W0912 22:56:22.687324 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.687478 kubelet[2862]: E0912 22:56:22.687361 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.690064 kubelet[2862]: E0912 22:56:22.687780 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.690064 kubelet[2862]: W0912 22:56:22.687817 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.690064 kubelet[2862]: E0912 22:56:22.687829 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.690064 kubelet[2862]: E0912 22:56:22.689695 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.690064 kubelet[2862]: W0912 22:56:22.689712 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.690064 kubelet[2862]: E0912 22:56:22.689760 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.692432 kubelet[2862]: E0912 22:56:22.690802 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.692432 kubelet[2862]: W0912 22:56:22.690817 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.692432 kubelet[2862]: E0912 22:56:22.690829 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.692813 kubelet[2862]: E0912 22:56:22.692784 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.692813 kubelet[2862]: W0912 22:56:22.692802 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.692949 kubelet[2862]: E0912 22:56:22.692816 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.696684 kubelet[2862]: E0912 22:56:22.696646 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.697064 kubelet[2862]: W0912 22:56:22.696839 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.697064 kubelet[2862]: E0912 22:56:22.696872 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.703597 kubelet[2862]: E0912 22:56:22.702937 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.703597 kubelet[2862]: W0912 22:56:22.702970 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.703597 kubelet[2862]: E0912 22:56:22.702999 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.711952 kubelet[2862]: E0912 22:56:22.711651 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.711952 kubelet[2862]: W0912 22:56:22.711690 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.711952 kubelet[2862]: E0912 22:56:22.711724 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.712471 kubelet[2862]: E0912 22:56:22.712454 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.712548 kubelet[2862]: W0912 22:56:22.712533 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.712629 kubelet[2862]: E0912 22:56:22.712604 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.713066 kubelet[2862]: E0912 22:56:22.713049 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.713448 kubelet[2862]: W0912 22:56:22.713131 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.713592 kubelet[2862]: E0912 22:56:22.713575 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.713951 kubelet[2862]: E0912 22:56:22.713933 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.714039 kubelet[2862]: W0912 22:56:22.714023 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.717554 kubelet[2862]: E0912 22:56:22.714108 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.718887 kubelet[2862]: E0912 22:56:22.718838 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.719052 kubelet[2862]: W0912 22:56:22.719031 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.719244 kubelet[2862]: E0912 22:56:22.719222 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.723876 kubelet[2862]: E0912 22:56:22.723795 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.727248 kubelet[2862]: W0912 22:56:22.725635 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.727248 kubelet[2862]: E0912 22:56:22.725711 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.728181 kubelet[2862]: E0912 22:56:22.728044 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.728181 kubelet[2862]: W0912 22:56:22.728072 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.728181 kubelet[2862]: E0912 22:56:22.728100 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.731175 kubelet[2862]: E0912 22:56:22.730574 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.731175 kubelet[2862]: W0912 22:56:22.730600 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.731175 kubelet[2862]: E0912 22:56:22.730626 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.731175 kubelet[2862]: E0912 22:56:22.730914 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.731175 kubelet[2862]: W0912 22:56:22.730926 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.731175 kubelet[2862]: E0912 22:56:22.730949 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.734068 kubelet[2862]: E0912 22:56:22.732829 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.734068 kubelet[2862]: W0912 22:56:22.732847 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.734068 kubelet[2862]: E0912 22:56:22.732861 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.737700 kubelet[2862]: E0912 22:56:22.737635 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.737700 kubelet[2862]: W0912 22:56:22.737680 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.737700 kubelet[2862]: E0912 22:56:22.737713 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.738765 kubelet[2862]: E0912 22:56:22.738693 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.738765 kubelet[2862]: W0912 22:56:22.738716 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.738765 kubelet[2862]: E0912 22:56:22.738739 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.743957 kubelet[2862]: E0912 22:56:22.743890 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.743957 kubelet[2862]: W0912 22:56:22.743935 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.743957 kubelet[2862]: E0912 22:56:22.743971 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.747498 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.751500 kubelet[2862]: W0912 22:56:22.747519 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.747541 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.750494 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.751500 kubelet[2862]: W0912 22:56:22.750511 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.750530 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.750983 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.751500 kubelet[2862]: W0912 22:56:22.750994 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.751007 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:56:22.751500 kubelet[2862]: E0912 22:56:22.751276 2862 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:56:22.751931 kubelet[2862]: W0912 22:56:22.751287 2862 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:56:22.751931 kubelet[2862]: E0912 22:56:22.751298 2862 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:56:23.168092 containerd[1605]: time="2025-09-12T22:56:23.167993043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.177067 containerd[1605]: time="2025-09-12T22:56:23.176900401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 22:56:23.186760 containerd[1605]: time="2025-09-12T22:56:23.186667740Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.203435 containerd[1605]: time="2025-09-12T22:56:23.202544378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:23.204511 containerd[1605]: time="2025-09-12T22:56:23.204434104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.999644419s" Sep 12 22:56:23.204570 containerd[1605]: time="2025-09-12T22:56:23.204517611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 22:56:23.234410 containerd[1605]: time="2025-09-12T22:56:23.234292634Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 22:56:23.267526 containerd[1605]: time="2025-09-12T22:56:23.264680743Z" level=info msg="Container 294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:23.288087 containerd[1605]: time="2025-09-12T22:56:23.287989394Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\"" Sep 12 22:56:23.291009 containerd[1605]: time="2025-09-12T22:56:23.290657608Z" level=info msg="StartContainer for \"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\"" Sep 12 22:56:23.293008 containerd[1605]: time="2025-09-12T22:56:23.292938533Z" level=info msg="connecting to shim 294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca" address="unix:///run/containerd/s/695a8de3dfd705c5da7eab8c374331237b9eb51bc5efae78501292264c1e392b" protocol=ttrpc version=3 Sep 12 22:56:23.333961 kubelet[2862]: E0912 22:56:23.331731 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:23.365744 systemd[1]: Started cri-containerd-294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca.scope - libcontainer container 294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca. Sep 12 22:56:23.515711 systemd[1]: cri-containerd-294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca.scope: Deactivated successfully. 
Sep 12 22:56:23.516162 systemd[1]: cri-containerd-294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca.scope: Consumed 68ms CPU time, 6.3M memory peak, 3.8M written to disk. Sep 12 22:56:23.527174 containerd[1605]: time="2025-09-12T22:56:23.527083730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\" id:\"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\" pid:3601 exited_at:{seconds:1757717783 nanos:525692855}" Sep 12 22:56:23.532778 containerd[1605]: time="2025-09-12T22:56:23.527474938Z" level=info msg="received exit event container_id:\"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\" id:\"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\" pid:3601 exited_at:{seconds:1757717783 nanos:525692855}" Sep 12 22:56:23.532778 containerd[1605]: time="2025-09-12T22:56:23.531708176Z" level=info msg="StartContainer for \"294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca\" returns successfully" Sep 12 22:56:23.599069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-294984ed5608ccca9b14362e58d035132f898cc8c48015f406a53f752b8af6ca-rootfs.mount: Deactivated successfully. 
Sep 12 22:56:23.672919 kubelet[2862]: E0912 22:56:23.672424 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:24.694739 containerd[1605]: time="2025-09-12T22:56:24.694642186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 22:56:25.336383 kubelet[2862]: E0912 22:56:25.331850 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:27.333795 kubelet[2862]: E0912 22:56:27.333694 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:29.342930 kubelet[2862]: E0912 22:56:29.336171 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:31.335333 kubelet[2862]: E0912 22:56:31.332273 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:32.343732 containerd[1605]: time="2025-09-12T22:56:32.342226227Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:32.343732 containerd[1605]: time="2025-09-12T22:56:32.343048016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 22:56:32.344938 containerd[1605]: time="2025-09-12T22:56:32.344882313Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:32.349288 containerd[1605]: time="2025-09-12T22:56:32.348491965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:32.350478 containerd[1605]: time="2025-09-12T22:56:32.350429598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 7.655681592s" Sep 12 22:56:32.350478 containerd[1605]: time="2025-09-12T22:56:32.350475584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 22:56:32.368811 containerd[1605]: time="2025-09-12T22:56:32.368749535Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 22:56:32.404445 containerd[1605]: time="2025-09-12T22:56:32.404110297Z" level=info msg="Container 940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0: CDI devices from CRI 
Config.CDIDevices: []" Sep 12 22:56:32.437810 containerd[1605]: time="2025-09-12T22:56:32.434426302Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\"" Sep 12 22:56:32.437810 containerd[1605]: time="2025-09-12T22:56:32.437508250Z" level=info msg="StartContainer for \"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\"" Sep 12 22:56:32.445019 containerd[1605]: time="2025-09-12T22:56:32.444873271Z" level=info msg="connecting to shim 940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0" address="unix:///run/containerd/s/695a8de3dfd705c5da7eab8c374331237b9eb51bc5efae78501292264c1e392b" protocol=ttrpc version=3 Sep 12 22:56:32.483900 systemd[1]: Started cri-containerd-940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0.scope - libcontainer container 940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0. Sep 12 22:56:32.648507 containerd[1605]: time="2025-09-12T22:56:32.647356531Z" level=info msg="StartContainer for \"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\" returns successfully" Sep 12 22:56:33.331977 kubelet[2862]: E0912 22:56:33.330689 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:35.190938 kubelet[2862]: I0912 22:56:35.186976 2862 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 22:56:35.207916 systemd[1]: cri-containerd-940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0.scope: Deactivated successfully. 
Sep 12 22:56:35.208918 systemd[1]: cri-containerd-940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0.scope: Consumed 956ms CPU time, 182.5M memory peak, 2.7M read from disk, 171.3M written to disk. Sep 12 22:56:35.217252 containerd[1605]: time="2025-09-12T22:56:35.210902403Z" level=info msg="received exit event container_id:\"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\" id:\"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\" pid:3659 exited_at:{seconds:1757717795 nanos:207278265}" Sep 12 22:56:35.217252 containerd[1605]: time="2025-09-12T22:56:35.211601821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\" id:\"940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0\" pid:3659 exited_at:{seconds:1757717795 nanos:207278265}" Sep 12 22:56:35.314890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-940240e68fdd99fbc7009d776c3aaad7a470110547c17f4112b953307a97abf0-rootfs.mount: Deactivated successfully. 
Sep 12 22:56:35.424655 kubelet[2862]: I0912 22:56:35.421736 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b43354cb-8c3f-4399-a45a-0f4a8dfe860f-config-volume\") pod \"coredns-674b8bbfcf-j7nvf\" (UID: \"b43354cb-8c3f-4399-a45a-0f4a8dfe860f\") " pod="kube-system/coredns-674b8bbfcf-j7nvf" Sep 12 22:56:35.424655 kubelet[2862]: I0912 22:56:35.421950 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f4a5b7b-14f1-40e2-b837-127fbfcb716e-calico-apiserver-certs\") pod \"calico-apiserver-5648cf76df-8sfrx\" (UID: \"8f4a5b7b-14f1-40e2-b837-127fbfcb716e\") " pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:35.424655 kubelet[2862]: I0912 22:56:35.422223 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dsq\" (UniqueName: \"kubernetes.io/projected/8f4a5b7b-14f1-40e2-b837-127fbfcb716e-kube-api-access-k5dsq\") pod \"calico-apiserver-5648cf76df-8sfrx\" (UID: \"8f4a5b7b-14f1-40e2-b837-127fbfcb716e\") " pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:35.424655 kubelet[2862]: I0912 22:56:35.422409 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqhq\" (UniqueName: \"kubernetes.io/projected/b43354cb-8c3f-4399-a45a-0f4a8dfe860f-kube-api-access-6wqhq\") pod \"coredns-674b8bbfcf-j7nvf\" (UID: \"b43354cb-8c3f-4399-a45a-0f4a8dfe860f\") " pod="kube-system/coredns-674b8bbfcf-j7nvf" Sep 12 22:56:35.455690 systemd[1]: Created slice kubepods-besteffort-pod8f4a5b7b_14f1_40e2_b837_127fbfcb716e.slice - libcontainer container kubepods-besteffort-pod8f4a5b7b_14f1_40e2_b837_127fbfcb716e.slice. 
Sep 12 22:56:35.486693 systemd[1]: Created slice kubepods-burstable-podb43354cb_8c3f_4399_a45a_0f4a8dfe860f.slice - libcontainer container kubepods-burstable-podb43354cb_8c3f_4399_a45a_0f4a8dfe860f.slice. Sep 12 22:56:35.569266 systemd[1]: Created slice kubepods-besteffort-pode017806f_04da_4501_94aa_d21ceb92cfe6.slice - libcontainer container kubepods-besteffort-pode017806f_04da_4501_94aa_d21ceb92cfe6.slice. Sep 12 22:56:35.576432 containerd[1605]: time="2025-09-12T22:56:35.575321850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:35.580228 systemd[1]: Created slice kubepods-besteffort-pod0ccbc9c8_51b9_4405_930a_410f3a255603.slice - libcontainer container kubepods-besteffort-pod0ccbc9c8_51b9_4405_930a_410f3a255603.slice. Sep 12 22:56:35.625418 kubelet[2862]: I0912 22:56:35.625313 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfw9p\" (UniqueName: \"kubernetes.io/projected/0ccbc9c8-51b9-4405-930a-410f3a255603-kube-api-access-xfw9p\") pod \"whisker-5cd9cd46cb-6t5k9\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " pod="calico-system/whisker-5cd9cd46cb-6t5k9" Sep 12 22:56:35.625590 kubelet[2862]: I0912 22:56:35.625457 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48684744-98ac-48bc-8701-ed3f0eaabebd-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-d4j5k\" (UID: \"48684744-98ac-48bc-8701-ed3f0eaabebd\") " pod="calico-system/goldmane-54d579b49d-d4j5k" Sep 12 22:56:35.625676 kubelet[2862]: I0912 22:56:35.625625 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95981e6b-8b61-4ef7-8054-72e5fdafc228-config-volume\") pod \"coredns-674b8bbfcf-ffh8k\" (UID: 
\"95981e6b-8b61-4ef7-8054-72e5fdafc228\") " pod="kube-system/coredns-674b8bbfcf-ffh8k" Sep 12 22:56:35.625759 kubelet[2862]: I0912 22:56:35.625665 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48684744-98ac-48bc-8701-ed3f0eaabebd-config\") pod \"goldmane-54d579b49d-d4j5k\" (UID: \"48684744-98ac-48bc-8701-ed3f0eaabebd\") " pod="calico-system/goldmane-54d579b49d-d4j5k" Sep 12 22:56:35.625759 kubelet[2862]: I0912 22:56:35.625745 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58c292a3-07a9-46f0-b413-47faa698b42a-tigera-ca-bundle\") pod \"calico-kube-controllers-78498cbbc6-zw9jr\" (UID: \"58c292a3-07a9-46f0-b413-47faa698b42a\") " pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr" Sep 12 22:56:35.626019 kubelet[2862]: I0912 22:56:35.625961 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/48684744-98ac-48bc-8701-ed3f0eaabebd-goldmane-key-pair\") pod \"goldmane-54d579b49d-d4j5k\" (UID: \"48684744-98ac-48bc-8701-ed3f0eaabebd\") " pod="calico-system/goldmane-54d579b49d-d4j5k" Sep 12 22:56:35.628084 kubelet[2862]: I0912 22:56:35.626085 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm55r\" (UniqueName: \"kubernetes.io/projected/95981e6b-8b61-4ef7-8054-72e5fdafc228-kube-api-access-bm55r\") pod \"coredns-674b8bbfcf-ffh8k\" (UID: \"95981e6b-8b61-4ef7-8054-72e5fdafc228\") " pod="kube-system/coredns-674b8bbfcf-ffh8k" Sep 12 22:56:35.628084 kubelet[2862]: I0912 22:56:35.626231 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-ca-bundle\") pod \"whisker-5cd9cd46cb-6t5k9\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " pod="calico-system/whisker-5cd9cd46cb-6t5k9" Sep 12 22:56:35.628084 kubelet[2862]: I0912 22:56:35.626312 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9sq\" (UniqueName: \"kubernetes.io/projected/58c292a3-07a9-46f0-b413-47faa698b42a-kube-api-access-zh9sq\") pod \"calico-kube-controllers-78498cbbc6-zw9jr\" (UID: \"58c292a3-07a9-46f0-b413-47faa698b42a\") " pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr" Sep 12 22:56:35.628084 kubelet[2862]: I0912 22:56:35.626343 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqn9\" (UniqueName: \"kubernetes.io/projected/48684744-98ac-48bc-8701-ed3f0eaabebd-kube-api-access-xnqn9\") pod \"goldmane-54d579b49d-d4j5k\" (UID: \"48684744-98ac-48bc-8701-ed3f0eaabebd\") " pod="calico-system/goldmane-54d579b49d-d4j5k" Sep 12 22:56:35.628084 kubelet[2862]: I0912 22:56:35.626463 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-backend-key-pair\") pod \"whisker-5cd9cd46cb-6t5k9\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " pod="calico-system/whisker-5cd9cd46cb-6t5k9" Sep 12 22:56:35.677771 systemd[1]: Created slice kubepods-besteffort-pod48684744_98ac_48bc_8701_ed3f0eaabebd.slice - libcontainer container kubepods-besteffort-pod48684744_98ac_48bc_8701_ed3f0eaabebd.slice. Sep 12 22:56:35.723362 systemd[1]: Created slice kubepods-besteffort-pod58c292a3_07a9_46f0_b413_47faa698b42a.slice - libcontainer container kubepods-besteffort-pod58c292a3_07a9_46f0_b413_47faa698b42a.slice. 
Sep 12 22:56:35.760088 systemd[1]: Created slice kubepods-besteffort-podc3de251e_ea13_4305_87e9_4ed8035dd202.slice - libcontainer container kubepods-besteffort-podc3de251e_ea13_4305_87e9_4ed8035dd202.slice. Sep 12 22:56:35.774424 containerd[1605]: time="2025-09-12T22:56:35.774356578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:56:35.777015 systemd[1]: Created slice kubepods-burstable-pod95981e6b_8b61_4ef7_8054_72e5fdafc228.slice - libcontainer container kubepods-burstable-pod95981e6b_8b61_4ef7_8054_72e5fdafc228.slice. Sep 12 22:56:35.787538 containerd[1605]: time="2025-09-12T22:56:35.787489510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 22:56:35.804453 kubelet[2862]: E0912 22:56:35.804370 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:35.810518 containerd[1605]: time="2025-09-12T22:56:35.810432996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:35.837427 kubelet[2862]: I0912 22:56:35.835239 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c3de251e-ea13-4305-87e9-4ed8035dd202-calico-apiserver-certs\") pod \"calico-apiserver-5648cf76df-dcqpq\" (UID: \"c3de251e-ea13-4305-87e9-4ed8035dd202\") " pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq" Sep 12 22:56:35.837427 kubelet[2862]: I0912 22:56:35.835306 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqg5l\" (UniqueName: 
\"kubernetes.io/projected/c3de251e-ea13-4305-87e9-4ed8035dd202-kube-api-access-mqg5l\") pod \"calico-apiserver-5648cf76df-dcqpq\" (UID: \"c3de251e-ea13-4305-87e9-4ed8035dd202\") " pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq" Sep 12 22:56:35.926245 containerd[1605]: time="2025-09-12T22:56:35.923831673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd9cd46cb-6t5k9,Uid:0ccbc9c8-51b9-4405-930a-410f3a255603,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:35.930983 containerd[1605]: time="2025-09-12T22:56:35.930843614Z" level=error msg="Failed to destroy network for sandbox \"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.959515 containerd[1605]: time="2025-09-12T22:56:35.957690434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.959721 kubelet[2862]: E0912 22:56:35.958009 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.959721 kubelet[2862]: E0912 22:56:35.958103 2862 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:35.959721 kubelet[2862]: E0912 22:56:35.958143 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktqqr" Sep 12 22:56:35.959846 kubelet[2862]: E0912 22:56:35.958210 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ktqqr_calico-system(e017806f-04da-4501-94aa-d21ceb92cfe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ktqqr_calico-system(e017806f-04da-4501-94aa-d21ceb92cfe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"293b4a184afde63f70e6556c901f53ef7ac29022fac1a5e8ee6ba3fb50013359\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6" Sep 12 22:56:35.982997 containerd[1605]: time="2025-09-12T22:56:35.982814638Z" level=error msg="Failed to destroy network for sandbox \"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.993602 containerd[1605]: time="2025-09-12T22:56:35.993521872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.994330 kubelet[2862]: E0912 22:56:35.994223 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:35.994480 kubelet[2862]: E0912 22:56:35.994359 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:35.994480 kubelet[2862]: E0912 22:56:35.994419 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:35.994935 kubelet[2862]: E0912 22:56:35.994882 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5648cf76df-8sfrx_calico-apiserver(8f4a5b7b-14f1-40e2-b837-127fbfcb716e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5648cf76df-8sfrx_calico-apiserver(8f4a5b7b-14f1-40e2-b837-127fbfcb716e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ecf964f1555f445e5966e1262f238c58005c118b0eff115c46bea469888d8fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" podUID="8f4a5b7b-14f1-40e2-b837-127fbfcb716e" Sep 12 22:56:36.020875 containerd[1605]: time="2025-09-12T22:56:36.020430535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:36.020875 containerd[1605]: time="2025-09-12T22:56:36.020722756Z" level=error msg="Failed to destroy network for sandbox \"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:36.030210 containerd[1605]: time="2025-09-12T22:56:36.030051089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.030788 kubelet[2862]: E0912 22:56:36.030706 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.030935 kubelet[2862]: E0912 22:56:36.030913 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j7nvf"
Sep 12 22:56:36.031025 kubelet[2862]: E0912 22:56:36.031008 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j7nvf"
Sep 12 22:56:36.031204 kubelet[2862]: E0912 22:56:36.031151 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j7nvf_kube-system(b43354cb-8c3f-4399-a45a-0f4a8dfe860f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j7nvf_kube-system(b43354cb-8c3f-4399-a45a-0f4a8dfe860f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46aace18ca565af18a57543b07ae96fcbcfe976d522ff795b175ee348ccb9fcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j7nvf" podUID="b43354cb-8c3f-4399-a45a-0f4a8dfe860f"
Sep 12 22:56:36.043936 containerd[1605]: time="2025-09-12T22:56:36.043614379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:36.071235 containerd[1605]: time="2025-09-12T22:56:36.071180569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-dcqpq,Uid:c3de251e-ea13-4305-87e9-4ed8035dd202,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 22:56:36.084214 kubelet[2862]: E0912 22:56:36.082687 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:36.084382 containerd[1605]: time="2025-09-12T22:56:36.083931320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,}"
Sep 12 22:56:36.100168 containerd[1605]: time="2025-09-12T22:56:36.100075332Z" level=error msg="Failed to destroy network for sandbox \"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.104768 containerd[1605]: time="2025-09-12T22:56:36.104696408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd9cd46cb-6t5k9,Uid:0ccbc9c8-51b9-4405-930a-410f3a255603,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.105073 kubelet[2862]: E0912 22:56:36.104996 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.105159 kubelet[2862]: E0912 22:56:36.105088 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd9cd46cb-6t5k9"
Sep 12 22:56:36.105159 kubelet[2862]: E0912 22:56:36.105133 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd9cd46cb-6t5k9"
Sep 12 22:56:36.105260 kubelet[2862]: E0912 22:56:36.105213 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cd9cd46cb-6t5k9_calico-system(0ccbc9c8-51b9-4405-930a-410f3a255603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cd9cd46cb-6t5k9_calico-system(0ccbc9c8-51b9-4405-930a-410f3a255603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4fa9ec43a06d18babad99b9f07aa4c983a110190643347df3f636b8edad7142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cd9cd46cb-6t5k9" podUID="0ccbc9c8-51b9-4405-930a-410f3a255603"
Sep 12 22:56:36.157704 containerd[1605]: time="2025-09-12T22:56:36.157599499Z" level=error msg="Failed to destroy network for sandbox \"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.160677 containerd[1605]: time="2025-09-12T22:56:36.160619247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.161414 kubelet[2862]: E0912 22:56:36.161108 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.161414 kubelet[2862]: E0912 22:56:36.161209 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d4j5k"
Sep 12 22:56:36.161414 kubelet[2862]: E0912 22:56:36.161239 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d4j5k"
Sep 12 22:56:36.161573 kubelet[2862]: E0912 22:56:36.161300 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d4j5k_calico-system(48684744-98ac-48bc-8701-ed3f0eaabebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d4j5k_calico-system(48684744-98ac-48bc-8701-ed3f0eaabebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41ef7dad3d9801ba786f548fa753e5c0d525a4420b0db056d0c25ce11316c61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d4j5k" podUID="48684744-98ac-48bc-8701-ed3f0eaabebd"
Sep 12 22:56:36.163526 containerd[1605]: time="2025-09-12T22:56:36.163466761Z" level=error msg="Failed to destroy network for sandbox \"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.168210 containerd[1605]: time="2025-09-12T22:56:36.168060065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.168792 kubelet[2862]: E0912 22:56:36.168567 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.168792 kubelet[2862]: E0912 22:56:36.168648 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr"
Sep 12 22:56:36.168792 kubelet[2862]: E0912 22:56:36.168676 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr"
Sep 12 22:56:36.168928 kubelet[2862]: E0912 22:56:36.168731 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78498cbbc6-zw9jr_calico-system(58c292a3-07a9-46f0-b413-47faa698b42a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78498cbbc6-zw9jr_calico-system(58c292a3-07a9-46f0-b413-47faa698b42a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be79ffeea1706cc4d9abc92b6dbcbff4834002bfd8340f8c93a68cdd04e77e8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr" podUID="58c292a3-07a9-46f0-b413-47faa698b42a"
Sep 12 22:56:36.206500 containerd[1605]: time="2025-09-12T22:56:36.206307236Z" level=error msg="Failed to destroy network for sandbox \"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.220379 containerd[1605]: time="2025-09-12T22:56:36.220291811Z" level=error msg="Failed to destroy network for sandbox \"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.279838 containerd[1605]: time="2025-09-12T22:56:36.279600599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.280032 kubelet[2862]: E0912 22:56:36.279924 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.280540 kubelet[2862]: E0912 22:56:36.280015 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffh8k"
Sep 12 22:56:36.280540 kubelet[2862]: E0912 22:56:36.280113 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffh8k"
Sep 12 22:56:36.280540 kubelet[2862]: E0912 22:56:36.280179 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ffh8k_kube-system(95981e6b-8b61-4ef7-8054-72e5fdafc228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ffh8k_kube-system(95981e6b-8b61-4ef7-8054-72e5fdafc228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acc7c9267bfae27aed58f632fbc2be91ab22645150407eee828aadce18b6fdd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ffh8k" podUID="95981e6b-8b61-4ef7-8054-72e5fdafc228"
Sep 12 22:56:36.325963 systemd[1]: run-netns-cni\x2d789974a1\x2d89c2\x2de3ad\x2d56c8\x2d5c541e3aa18e.mount: Deactivated successfully.
Sep 12 22:56:36.547662 containerd[1605]: time="2025-09-12T22:56:36.547332713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-dcqpq,Uid:c3de251e-ea13-4305-87e9-4ed8035dd202,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.549184 kubelet[2862]: E0912 22:56:36.548283 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:36.549184 kubelet[2862]: E0912 22:56:36.548371 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq"
Sep 12 22:56:36.549184 kubelet[2862]: E0912 22:56:36.548412 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq"
Sep 12 22:56:36.549484 kubelet[2862]: E0912 22:56:36.548478 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5648cf76df-dcqpq_calico-apiserver(c3de251e-ea13-4305-87e9-4ed8035dd202)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5648cf76df-dcqpq_calico-apiserver(c3de251e-ea13-4305-87e9-4ed8035dd202)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d79d8f2daa005e82f17b32e8e6c146afac1bd260e311c1a1cf42a0bda6982b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq" podUID="c3de251e-ea13-4305-87e9-4ed8035dd202"
Sep 12 22:56:45.199158 kernel: hrtimer: interrupt took 4562708 ns
Sep 12 22:56:47.366790 containerd[1605]: time="2025-09-12T22:56:47.366715557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:47.367352 containerd[1605]: time="2025-09-12T22:56:47.367241577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:47.373126 containerd[1605]: time="2025-09-12T22:56:47.368082500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd9cd46cb-6t5k9,Uid:0ccbc9c8-51b9-4405-930a-410f3a255603,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:47.936080 containerd[1605]: time="2025-09-12T22:56:47.935979383Z" level=error msg="Failed to destroy network for sandbox \"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:47.942583 containerd[1605]: time="2025-09-12T22:56:47.942519572Z" level=error msg="Failed to destroy network for sandbox \"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:47.943305 systemd[1]: run-netns-cni\x2d27154074\x2d19db\x2d8b94\x2dd3f0\x2dd44b51e841ea.mount: Deactivated successfully.
Sep 12 22:56:47.948501 containerd[1605]: time="2025-09-12T22:56:47.947904116Z" level=error msg="Failed to destroy network for sandbox \"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:47.949401 systemd[1]: run-netns-cni\x2d09ba2d48\x2d60d6\x2df08e\x2d3d5a\x2dfd6dd953775b.mount: Deactivated successfully.
Sep 12 22:56:47.952423 systemd[1]: run-netns-cni\x2d33714ebe\x2d4ec1\x2dc100\x2d4bc8\x2d296a5a44c639.mount: Deactivated successfully.
Sep 12 22:56:48.219593 containerd[1605]: time="2025-09-12T22:56:48.219500511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.228952 kubelet[2862]: E0912 22:56:48.226678 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.228952 kubelet[2862]: E0912 22:56:48.226816 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr"
Sep 12 22:56:48.228952 kubelet[2862]: E0912 22:56:48.226847 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr"
Sep 12 22:56:48.229591 kubelet[2862]: E0912 22:56:48.226931 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78498cbbc6-zw9jr_calico-system(58c292a3-07a9-46f0-b413-47faa698b42a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78498cbbc6-zw9jr_calico-system(58c292a3-07a9-46f0-b413-47faa698b42a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b738e2e0aca53737801e01e4ab8b60b5a7662521a26912c7eb7852421c584c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr" podUID="58c292a3-07a9-46f0-b413-47faa698b42a"
Sep 12 22:56:48.241766 containerd[1605]: time="2025-09-12T22:56:48.239093775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd9cd46cb-6t5k9,Uid:0ccbc9c8-51b9-4405-930a-410f3a255603,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.241766 containerd[1605]: time="2025-09-12T22:56:48.240574963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.247301 kubelet[2862]: E0912 22:56:48.241226 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.247301 kubelet[2862]: E0912 22:56:48.241304 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd9cd46cb-6t5k9"
Sep 12 22:56:48.247301 kubelet[2862]: E0912 22:56:48.241333 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd9cd46cb-6t5k9"
Sep 12 22:56:48.247485 kubelet[2862]: E0912 22:56:48.241402 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cd9cd46cb-6t5k9_calico-system(0ccbc9c8-51b9-4405-930a-410f3a255603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cd9cd46cb-6t5k9_calico-system(0ccbc9c8-51b9-4405-930a-410f3a255603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ded37c97d6294f5849f67b5ba742077e53f0de9e926a473a1f34d1136b5fb028\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cd9cd46cb-6t5k9" podUID="0ccbc9c8-51b9-4405-930a-410f3a255603"
Sep 12 22:56:48.247485 kubelet[2862]: E0912 22:56:48.241465 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.247485 kubelet[2862]: E0912 22:56:48.241494 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d4j5k"
Sep 12 22:56:48.247603 kubelet[2862]: E0912 22:56:48.241511 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d4j5k"
Sep 12 22:56:48.247603 kubelet[2862]: E0912 22:56:48.241560 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d4j5k_calico-system(48684744-98ac-48bc-8701-ed3f0eaabebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d4j5k_calico-system(48684744-98ac-48bc-8701-ed3f0eaabebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8200fd887b23d7f4b92a75605841e0958fc0d72be42dc45e98d79aa09e1bb68a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d4j5k" podUID="48684744-98ac-48bc-8701-ed3f0eaabebd"
Sep 12 22:56:48.334087 kubelet[2862]: E0912 22:56:48.331608 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:48.334087 kubelet[2862]: E0912 22:56:48.332346 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:48.334891 containerd[1605]: time="2025-09-12T22:56:48.334824515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,}"
Sep 12 22:56:48.335071 containerd[1605]: time="2025-09-12T22:56:48.335033798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,}"
Sep 12 22:56:48.339727 containerd[1605]: time="2025-09-12T22:56:48.339421466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,}"
Sep 12 22:56:48.340231 containerd[1605]: time="2025-09-12T22:56:48.340177949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 22:56:48.525821 containerd[1605]: time="2025-09-12T22:56:48.524865340Z" level=error msg="Failed to destroy network for sandbox \"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.536069 containerd[1605]: time="2025-09-12T22:56:48.535977423Z" level=error msg="Failed to destroy network for sandbox \"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.568763 containerd[1605]: time="2025-09-12T22:56:48.567498974Z" level=error msg="Failed to destroy network for sandbox \"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.594868 containerd[1605]: time="2025-09-12T22:56:48.594791447Z" level=error msg="Failed to destroy network for sandbox \"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.757490 containerd[1605]: time="2025-09-12T22:56:48.756326586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.759623 kubelet[2862]: E0912 22:56:48.758855 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.759623 kubelet[2862]: E0912 22:56:48.759027 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktqqr"
Sep 12 22:56:48.759623 kubelet[2862]: E0912 22:56:48.759059 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ktqqr"
Sep 12 22:56:48.771437 kubelet[2862]: E0912 22:56:48.768407 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ktqqr_calico-system(e017806f-04da-4501-94aa-d21ceb92cfe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ktqqr_calico-system(e017806f-04da-4501-94aa-d21ceb92cfe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3efda4a0a54ed4429fbc31ca4bcf8634b7e41cab43322cbb0662e22d526f2e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ktqqr" podUID="e017806f-04da-4501-94aa-d21ceb92cfe6"
Sep 12 22:56:48.868562 containerd[1605]: time="2025-09-12T22:56:48.866287637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.868765 kubelet[2862]: E0912 22:56:48.866780 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.868765 kubelet[2862]: E0912 22:56:48.866875 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j7nvf"
Sep 12 22:56:48.868765 kubelet[2862]: E0912 22:56:48.866911 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j7nvf"
Sep 12 22:56:48.868903 kubelet[2862]: E0912 22:56:48.866994 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j7nvf_kube-system(b43354cb-8c3f-4399-a45a-0f4a8dfe860f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j7nvf_kube-system(b43354cb-8c3f-4399-a45a-0f4a8dfe860f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e8da62f06bb9b9a5c889b31c997308197c1664b793c598148e3d6137d6f7a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j7nvf" podUID="b43354cb-8c3f-4399-a45a-0f4a8dfe860f"
Sep 12 22:56:48.871013 containerd[1605]: time="2025-09-12T22:56:48.870754854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.871235 kubelet[2862]: E0912 22:56:48.871153 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 22:56:48.871298 kubelet[2862]: E0912 22:56:48.871253 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffh8k"
Sep 12 22:56:48.871338 kubelet[2862]: E0912 22:56:48.871309 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ffh8k"
Sep 12 22:56:48.872415 kubelet[2862]: E0912 22:56:48.871461 2862 pod_workers.go:1301] "Error syncing pod,
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ffh8k_kube-system(95981e6b-8b61-4ef7-8054-72e5fdafc228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ffh8k_kube-system(95981e6b-8b61-4ef7-8054-72e5fdafc228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"784c93259bf3aafc8563e63c0b9c190047fa2049e3b9928bba1b396767808672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ffh8k" podUID="95981e6b-8b61-4ef7-8054-72e5fdafc228" Sep 12 22:56:48.874622 containerd[1605]: time="2025-09-12T22:56:48.874294154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:48.875460 kubelet[2862]: E0912 22:56:48.875364 2862 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:56:48.875545 kubelet[2862]: E0912 22:56:48.875482 2862 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:48.875595 kubelet[2862]: E0912 22:56:48.875541 2862 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" Sep 12 22:56:48.875644 kubelet[2862]: E0912 22:56:48.875614 2862 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5648cf76df-8sfrx_calico-apiserver(8f4a5b7b-14f1-40e2-b837-127fbfcb716e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5648cf76df-8sfrx_calico-apiserver(8f4a5b7b-14f1-40e2-b837-127fbfcb716e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d78837da7a8213fc802a8ee0fee2d63f782eedc01635c7fbecbff4900ddf03dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" podUID="8f4a5b7b-14f1-40e2-b837-127fbfcb716e" Sep 12 22:56:49.948008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427144647.mount: Deactivated successfully. 
Sep 12 22:56:50.006928 containerd[1605]: time="2025-09-12T22:56:50.005984800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:50.009275 containerd[1605]: time="2025-09-12T22:56:50.008797782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 22:56:50.011253 containerd[1605]: time="2025-09-12T22:56:50.011175216Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:50.055535 containerd[1605]: time="2025-09-12T22:56:50.050648653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:50.055535 containerd[1605]: time="2025-09-12T22:56:50.051435944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 14.263191501s" Sep 12 22:56:50.055535 containerd[1605]: time="2025-09-12T22:56:50.053910690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 22:56:50.148496 containerd[1605]: time="2025-09-12T22:56:50.148322444Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 22:56:50.180151 containerd[1605]: time="2025-09-12T22:56:50.179324240Z" level=info msg="Container 
2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:50.213278 containerd[1605]: time="2025-09-12T22:56:50.212422841Z" level=info msg="CreateContainer within sandbox \"c2aeb5080758182ee35e810e55358e8f209b4401c6f4e7940fc0dd388d06e9d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\"" Sep 12 22:56:50.218524 containerd[1605]: time="2025-09-12T22:56:50.216607175Z" level=info msg="StartContainer for \"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\"" Sep 12 22:56:50.221018 containerd[1605]: time="2025-09-12T22:56:50.220968691Z" level=info msg="connecting to shim 2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793" address="unix:///run/containerd/s/695a8de3dfd705c5da7eab8c374331237b9eb51bc5efae78501292264c1e392b" protocol=ttrpc version=3 Sep 12 22:56:50.325431 systemd[1]: Started cri-containerd-2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793.scope - libcontainer container 2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793. Sep 12 22:56:50.434842 containerd[1605]: time="2025-09-12T22:56:50.434768426Z" level=info msg="StartContainer for \"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" returns successfully" Sep 12 22:56:50.641319 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 22:56:50.641485 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 22:56:51.036537 kubelet[2862]: I0912 22:56:51.034522 2862 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-ca-bundle\") pod \"0ccbc9c8-51b9-4405-930a-410f3a255603\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " Sep 12 22:56:51.036537 kubelet[2862]: I0912 22:56:51.034600 2862 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-backend-key-pair\") pod \"0ccbc9c8-51b9-4405-930a-410f3a255603\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " Sep 12 22:56:51.036537 kubelet[2862]: I0912 22:56:51.034642 2862 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfw9p\" (UniqueName: \"kubernetes.io/projected/0ccbc9c8-51b9-4405-930a-410f3a255603-kube-api-access-xfw9p\") pod \"0ccbc9c8-51b9-4405-930a-410f3a255603\" (UID: \"0ccbc9c8-51b9-4405-930a-410f3a255603\") " Sep 12 22:56:51.036537 kubelet[2862]: I0912 22:56:51.035150 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0ccbc9c8-51b9-4405-930a-410f3a255603" (UID: "0ccbc9c8-51b9-4405-930a-410f3a255603"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 22:56:51.062127 systemd[1]: var-lib-kubelet-pods-0ccbc9c8\x2d51b9\x2d4405\x2d930a\x2d410f3a255603-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 22:56:51.072182 systemd[1]: var-lib-kubelet-pods-0ccbc9c8\x2d51b9\x2d4405\x2d930a\x2d410f3a255603-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfw9p.mount: Deactivated successfully. 
Sep 12 22:56:51.076450 kubelet[2862]: I0912 22:56:51.076338 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0ccbc9c8-51b9-4405-930a-410f3a255603" (UID: "0ccbc9c8-51b9-4405-930a-410f3a255603"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 22:56:51.084547 kubelet[2862]: I0912 22:56:51.084285 2862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ccbc9c8-51b9-4405-930a-410f3a255603-kube-api-access-xfw9p" (OuterVolumeSpecName: "kube-api-access-xfw9p") pod "0ccbc9c8-51b9-4405-930a-410f3a255603" (UID: "0ccbc9c8-51b9-4405-930a-410f3a255603"). InnerVolumeSpecName "kube-api-access-xfw9p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 22:56:51.137141 kubelet[2862]: I0912 22:56:51.135983 2862 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 22:56:51.137141 kubelet[2862]: I0912 22:56:51.136027 2862 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ccbc9c8-51b9-4405-930a-410f3a255603-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 22:56:51.137141 kubelet[2862]: I0912 22:56:51.136039 2862 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfw9p\" (UniqueName: \"kubernetes.io/projected/0ccbc9c8-51b9-4405-930a-410f3a255603-kube-api-access-xfw9p\") on node \"localhost\" DevicePath \"\"" Sep 12 22:56:51.372402 systemd[1]: Removed slice kubepods-besteffort-pod0ccbc9c8_51b9_4405_930a_410f3a255603.slice - libcontainer container kubepods-besteffort-pod0ccbc9c8_51b9_4405_930a_410f3a255603.slice. 
Sep 12 22:56:51.494092 containerd[1605]: time="2025-09-12T22:56:51.488601943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" id:\"2195838bfee73c8aa50f37891dcda93167e0f5e2a7c4b855798602b805986ad0\" pid:4257 exit_status:1 exited_at:{seconds:1757717811 nanos:487014877}" Sep 12 22:56:51.498895 kubelet[2862]: I0912 22:56:51.483728 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5fc4q" podStartSLOduration=2.446706964 podStartE2EDuration="36.4837053s" podCreationTimestamp="2025-09-12 22:56:15 +0000 UTC" firstStartedPulling="2025-09-12 22:56:16.029459474 +0000 UTC m=+25.053767933" lastFinishedPulling="2025-09-12 22:56:50.06645782 +0000 UTC m=+59.090766269" observedRunningTime="2025-09-12 22:56:51.364542708 +0000 UTC m=+60.388851177" watchObservedRunningTime="2025-09-12 22:56:51.4837053 +0000 UTC m=+60.508013769" Sep 12 22:56:52.331543 systemd[1]: Created slice kubepods-besteffort-pod1dc6aeae_2417_4331_804b_dd66042a496d.slice - libcontainer container kubepods-besteffort-pod1dc6aeae_2417_4331_804b_dd66042a496d.slice. 
Sep 12 22:56:52.381892 containerd[1605]: time="2025-09-12T22:56:52.378829062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-dcqpq,Uid:c3de251e-ea13-4305-87e9-4ed8035dd202,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:56:52.386902 kubelet[2862]: I0912 22:56:52.384364 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dc6aeae-2417-4331-804b-dd66042a496d-whisker-backend-key-pair\") pod \"whisker-748d6f8c75-ktlbs\" (UID: \"1dc6aeae-2417-4331-804b-dd66042a496d\") " pod="calico-system/whisker-748d6f8c75-ktlbs" Sep 12 22:56:52.386902 kubelet[2862]: I0912 22:56:52.384511 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr756\" (UniqueName: \"kubernetes.io/projected/1dc6aeae-2417-4331-804b-dd66042a496d-kube-api-access-sr756\") pod \"whisker-748d6f8c75-ktlbs\" (UID: \"1dc6aeae-2417-4331-804b-dd66042a496d\") " pod="calico-system/whisker-748d6f8c75-ktlbs" Sep 12 22:56:52.386902 kubelet[2862]: I0912 22:56:52.384544 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dc6aeae-2417-4331-804b-dd66042a496d-whisker-ca-bundle\") pod \"whisker-748d6f8c75-ktlbs\" (UID: \"1dc6aeae-2417-4331-804b-dd66042a496d\") " pod="calico-system/whisker-748d6f8c75-ktlbs" Sep 12 22:56:52.691756 containerd[1605]: time="2025-09-12T22:56:52.691008523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748d6f8c75-ktlbs,Uid:1dc6aeae-2417-4331-804b-dd66042a496d,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:52.780277 containerd[1605]: time="2025-09-12T22:56:52.780210451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" 
id:\"6d099299f9988328d1469d361de96515c08b51a02620611ad28d37e0d8178c73\" pid:4299 exit_status:1 exited_at:{seconds:1757717812 nanos:779593431}" Sep 12 22:56:53.032339 systemd-networkd[1508]: califa1f34a7098: Link UP Sep 12 22:56:53.034439 systemd-networkd[1508]: califa1f34a7098: Gained carrier Sep 12 22:56:53.072212 containerd[1605]: 2025-09-12 22:56:52.764 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 22:56:53.072212 containerd[1605]: 2025-09-12 22:56:52.801 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--748d6f8c75--ktlbs-eth0 whisker-748d6f8c75- calico-system 1dc6aeae-2417-4331-804b-dd66042a496d 962 0 2025-09-12 22:56:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:748d6f8c75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-748d6f8c75-ktlbs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califa1f34a7098 [] [] }} ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-" Sep 12 22:56:53.072212 containerd[1605]: 2025-09-12 22:56:52.801 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.072212 containerd[1605]: 2025-09-12 22:56:52.878 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" HandleID="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" 
Workload="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.878 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" HandleID="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Workload="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-748d6f8c75-ktlbs", "timestamp":"2025-09-12 22:56:52.878000068 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.878 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.879 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.882 [INFO][4347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.900 [INFO][4347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" host="localhost" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.925 [INFO][4347] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.947 [INFO][4347] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.954 [INFO][4347] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.960 [INFO][4347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:53.072520 containerd[1605]: 2025-09-12 22:56:52.960 [INFO][4347] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" host="localhost" Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.967 [INFO][4347] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1 Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.975 [INFO][4347] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" host="localhost" Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4347] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" host="localhost" Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" host="localhost" Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:56:53.072835 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" HandleID="k8s-pod-network.6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Workload="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.073078 containerd[1605]: 2025-09-12 22:56:53.002 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748d6f8c75--ktlbs-eth0", GenerateName:"whisker-748d6f8c75-", Namespace:"calico-system", SelfLink:"", UID:"1dc6aeae-2417-4331-804b-dd66042a496d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748d6f8c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-748d6f8c75-ktlbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa1f34a7098", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:53.073078 containerd[1605]: 2025-09-12 22:56:53.002 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.073192 containerd[1605]: 2025-09-12 22:56:53.002 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa1f34a7098 ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.073192 containerd[1605]: 2025-09-12 22:56:53.036 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.073253 containerd[1605]: 2025-09-12 22:56:53.036 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" 
WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748d6f8c75--ktlbs-eth0", GenerateName:"whisker-748d6f8c75-", Namespace:"calico-system", SelfLink:"", UID:"1dc6aeae-2417-4331-804b-dd66042a496d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748d6f8c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1", Pod:"whisker-748d6f8c75-ktlbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa1f34a7098", MAC:"86:02:19:47:a4:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:53.073320 containerd[1605]: 2025-09-12 22:56:53.069 [INFO][4328] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" Namespace="calico-system" Pod="whisker-748d6f8c75-ktlbs" WorkloadEndpoint="localhost-k8s-whisker--748d6f8c75--ktlbs-eth0" Sep 12 22:56:53.128352 systemd-networkd[1508]: cali06f8250c43d: Link UP Sep 12 22:56:53.134417 systemd-networkd[1508]: 
cali06f8250c43d: Gained carrier Sep 12 22:56:53.177334 containerd[1605]: 2025-09-12 22:56:52.608 [INFO][4272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 22:56:53.177334 containerd[1605]: 2025-09-12 22:56:52.683 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0 calico-apiserver-5648cf76df- calico-apiserver c3de251e-ea13-4305-87e9-4ed8035dd202 873 0 2025-09-12 22:56:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5648cf76df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5648cf76df-dcqpq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06f8250c43d [] [] }} ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-" Sep 12 22:56:53.177334 containerd[1605]: 2025-09-12 22:56:52.683 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.177334 containerd[1605]: 2025-09-12 22:56:52.878 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" HandleID="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Workload="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:52.878 [INFO][4330] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" HandleID="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Workload="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5648cf76df-dcqpq", "timestamp":"2025-09-12 22:56:52.877991742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:52.879 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:52.996 [INFO][4330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.017 [INFO][4330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" host="localhost" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.036 [INFO][4330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.064 [INFO][4330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.069 [INFO][4330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.077 [INFO][4330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:53.177689 containerd[1605]: 2025-09-12 22:56:53.077 [INFO][4330] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" host="localhost" Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.080 [INFO][4330] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.098 [INFO][4330] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" host="localhost" Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.113 [INFO][4330] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" host="localhost" Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.113 [INFO][4330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" host="localhost" Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.113 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:56:53.177984 containerd[1605]: 2025-09-12 22:56:53.113 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" HandleID="k8s-pod-network.a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Workload="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.178138 containerd[1605]: 2025-09-12 22:56:53.121 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0", GenerateName:"calico-apiserver-5648cf76df-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3de251e-ea13-4305-87e9-4ed8035dd202", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5648cf76df", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5648cf76df-dcqpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06f8250c43d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:53.178209 containerd[1605]: 2025-09-12 22:56:53.121 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.178209 containerd[1605]: 2025-09-12 22:56:53.121 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06f8250c43d ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.178209 containerd[1605]: 2025-09-12 22:56:53.134 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.178289 containerd[1605]: 2025-09-12 22:56:53.135 [INFO][4272] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0", GenerateName:"calico-apiserver-5648cf76df-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3de251e-ea13-4305-87e9-4ed8035dd202", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5648cf76df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b", Pod:"calico-apiserver-5648cf76df-dcqpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06f8250c43d", MAC:"0e:fe:f2:36:1c:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:53.178353 containerd[1605]: 2025-09-12 22:56:53.161 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-dcqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--dcqpq-eth0" Sep 12 22:56:53.325059 containerd[1605]: time="2025-09-12T22:56:53.321498463Z" level=info msg="connecting to shim a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b" address="unix:///run/containerd/s/be5176b646764781623391a8c96cd8babbe217855cfaf5f57671425d85362029" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:53.329900 containerd[1605]: time="2025-09-12T22:56:53.327578281Z" level=info msg="connecting to shim 6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1" address="unix:///run/containerd/s/28ce2e2b831ab76aae6c59efdd2437854aa5cea73a64a813bf00a3d86ffbf5a3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:53.337925 kubelet[2862]: I0912 22:56:53.337721 2862 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ccbc9c8-51b9-4405-930a-410f3a255603" path="/var/lib/kubelet/pods/0ccbc9c8-51b9-4405-930a-410f3a255603/volumes" Sep 12 22:56:53.382741 systemd[1]: Started cri-containerd-a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b.scope - libcontainer container a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b. Sep 12 22:56:53.402516 systemd[1]: Started cri-containerd-6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1.scope - libcontainer container 6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1. 
Sep 12 22:56:53.456361 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:56:53.460280 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:56:53.542984 containerd[1605]: time="2025-09-12T22:56:53.542892430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748d6f8c75-ktlbs,Uid:1dc6aeae-2417-4331-804b-dd66042a496d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1\"" Sep 12 22:56:53.545024 containerd[1605]: time="2025-09-12T22:56:53.544906990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-dcqpq,Uid:c3de251e-ea13-4305-87e9-4ed8035dd202,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b\"" Sep 12 22:56:53.554576 containerd[1605]: time="2025-09-12T22:56:53.554097650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 22:56:54.243207 systemd-networkd[1508]: cali06f8250c43d: Gained IPv6LL Sep 12 22:56:54.956902 systemd-networkd[1508]: vxlan.calico: Link UP Sep 12 22:56:54.956911 systemd-networkd[1508]: vxlan.calico: Gained carrier Sep 12 22:56:55.072648 systemd-networkd[1508]: califa1f34a7098: Gained IPv6LL Sep 12 22:56:56.359047 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Sep 12 22:56:58.980864 containerd[1605]: time="2025-09-12T22:56:58.980757780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:58.983214 containerd[1605]: time="2025-09-12T22:56:58.983063587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 22:56:58.986882 containerd[1605]: time="2025-09-12T22:56:58.985636406Z" level=info 
msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:59.001211 containerd[1605]: time="2025-09-12T22:56:59.001020601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:56:59.010766 containerd[1605]: time="2025-09-12T22:56:59.007784783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.453625465s" Sep 12 22:56:59.010766 containerd[1605]: time="2025-09-12T22:56:59.007843012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 22:56:59.011340 containerd[1605]: time="2025-09-12T22:56:59.011209033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 22:56:59.049022 containerd[1605]: time="2025-09-12T22:56:59.048963569Z" level=info msg="CreateContainer within sandbox \"a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 22:56:59.092412 containerd[1605]: time="2025-09-12T22:56:59.092293100Z" level=info msg="Container dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:56:59.126942 containerd[1605]: time="2025-09-12T22:56:59.126855473Z" level=info msg="CreateContainer within sandbox \"a14f92239445311c59afc09d75f999be70d55a41a41e4223e4e315b78f06ea8b\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12\"" Sep 12 22:56:59.128462 containerd[1605]: time="2025-09-12T22:56:59.127740628Z" level=info msg="StartContainer for \"dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12\"" Sep 12 22:56:59.129548 containerd[1605]: time="2025-09-12T22:56:59.129515456Z" level=info msg="connecting to shim dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12" address="unix:///run/containerd/s/be5176b646764781623391a8c96cd8babbe217855cfaf5f57671425d85362029" protocol=ttrpc version=3 Sep 12 22:56:59.234740 systemd[1]: Started cri-containerd-dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12.scope - libcontainer container dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12. Sep 12 22:56:59.333227 kubelet[2862]: E0912 22:56:59.332368 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:56:59.335422 containerd[1605]: time="2025-09-12T22:56:59.334466655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,}" Sep 12 22:56:59.335422 containerd[1605]: time="2025-09-12T22:56:59.334890612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:59.335422 containerd[1605]: time="2025-09-12T22:56:59.335044862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,}" Sep 12 22:56:59.365502 containerd[1605]: time="2025-09-12T22:56:59.365445086Z" level=info msg="StartContainer for 
\"dc9b8b1ee6d8c3b80715e4d312a49a748e1eb214b34c09bdf6a7c46f417b8a12\" returns successfully" Sep 12 22:56:59.679483 systemd-networkd[1508]: calieae2a558d10: Link UP Sep 12 22:56:59.681880 systemd-networkd[1508]: calieae2a558d10: Gained carrier Sep 12 22:56:59.741433 containerd[1605]: 2025-09-12 22:56:59.457 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--d4j5k-eth0 goldmane-54d579b49d- calico-system 48684744-98ac-48bc-8701-ed3f0eaabebd 870 0 2025-09-12 22:56:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-d4j5k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calieae2a558d10 [] [] }} ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-" Sep 12 22:56:59.741433 containerd[1605]: 2025-09-12 22:56:59.458 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.741433 containerd[1605]: 2025-09-12 22:56:59.528 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" HandleID="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Workload="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.528 [INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" HandleID="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Workload="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-d4j5k", "timestamp":"2025-09-12 22:56:59.528487836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.528 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.528 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.528 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.545 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" host="localhost" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.571 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.584 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.590 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.599 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 12 22:56:59.741742 containerd[1605]: 2025-09-12 22:56:59.599 [INFO][4759] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" host="localhost" Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.604 [INFO][4759] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56 Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.620 [INFO][4759] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" host="localhost" Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.645 [INFO][4759] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" host="localhost" Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.654 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" host="localhost" Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.654 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 22:56:59.742014 containerd[1605]: 2025-09-12 22:56:59.655 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" HandleID="k8s-pod-network.fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Workload="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.742168 containerd[1605]: 2025-09-12 22:56:59.666 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d4j5k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48684744-98ac-48bc-8701-ed3f0eaabebd", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-d4j5k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieae2a558d10", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:59.742168 containerd[1605]: 2025-09-12 22:56:59.666 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.742263 containerd[1605]: 2025-09-12 22:56:59.666 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieae2a558d10 ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.742263 containerd[1605]: 2025-09-12 22:56:59.684 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.742323 containerd[1605]: 2025-09-12 22:56:59.686 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d4j5k-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48684744-98ac-48bc-8701-ed3f0eaabebd", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56", Pod:"goldmane-54d579b49d-d4j5k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieae2a558d10", MAC:"52:09:c6:36:1f:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:59.742465 containerd[1605]: 2025-09-12 22:56:59.727 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" Namespace="calico-system" Pod="goldmane-54d579b49d-d4j5k" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d4j5k-eth0" Sep 12 22:56:59.880498 systemd-networkd[1508]: cali8aac7dc295a: Link UP Sep 12 22:56:59.882370 systemd-networkd[1508]: cali8aac7dc295a: Gained carrier Sep 12 22:56:59.914459 containerd[1605]: time="2025-09-12T22:56:59.914187625Z" level=info msg="connecting to shim fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56" address="unix:///run/containerd/s/ef8457a0bf98d8715490b5069fb369e1066a9cb6003f296c532ddd9079ff1ca2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:56:59.944047 containerd[1605]: 2025-09-12 22:56:59.453 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0 coredns-674b8bbfcf- kube-system 95981e6b-8b61-4ef7-8054-72e5fdafc228 871 0 2025-09-12 22:55:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ffh8k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8aac7dc295a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-" Sep 12 22:56:59.944047 containerd[1605]: 2025-09-12 22:56:59.453 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.944047 containerd[1605]: 2025-09-12 22:56:59.529 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" HandleID="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Workload="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.530 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" HandleID="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Workload="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ffh8k", 
"timestamp":"2025-09-12 22:56:59.529730865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.530 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.655 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.656 [INFO][4757] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.677 [INFO][4757] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" host="localhost" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.696 [INFO][4757] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.740 [INFO][4757] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.747 [INFO][4757] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.769 [INFO][4757] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:56:59.944305 containerd[1605]: 2025-09-12 22:56:59.769 [INFO][4757] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" host="localhost" Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.777 [INFO][4757] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.813 [INFO][4757] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" host="localhost" Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.859 [INFO][4757] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" host="localhost" Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.859 [INFO][4757] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" host="localhost" Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.860 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 22:56:59.944693 containerd[1605]: 2025-09-12 22:56:59.860 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" HandleID="k8s-pod-network.821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Workload="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.944869 containerd[1605]: 2025-09-12 22:56:59.875 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95981e6b-8b61-4ef7-8054-72e5fdafc228", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ffh8k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8aac7dc295a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:59.944957 containerd[1605]: 2025-09-12 22:56:59.875 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.944957 containerd[1605]: 2025-09-12 22:56:59.875 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aac7dc295a ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.944957 containerd[1605]: 2025-09-12 22:56:59.880 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.945069 containerd[1605]: 2025-09-12 22:56:59.888 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95981e6b-8b61-4ef7-8054-72e5fdafc228", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff", Pod:"coredns-674b8bbfcf-ffh8k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8aac7dc295a", MAC:"12:47:f3:a9:23:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:56:59.945069 containerd[1605]: 2025-09-12 22:56:59.926 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" Namespace="kube-system" Pod="coredns-674b8bbfcf-ffh8k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ffh8k-eth0" Sep 12 22:56:59.980931 systemd[1]: Started cri-containerd-fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56.scope - libcontainer container fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56. Sep 12 22:57:00.043595 containerd[1605]: time="2025-09-12T22:57:00.043452854Z" level=info msg="connecting to shim 821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff" address="unix:///run/containerd/s/f4ecece745be33b09b0fab2918f734fb4b03c11e5c3dbcbcc664f962057ce018" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:00.121090 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:57:00.159271 kubelet[2862]: I0912 22:57:00.158590 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5648cf76df-dcqpq" podStartSLOduration=44.698375455 podStartE2EDuration="50.158564028s" podCreationTimestamp="2025-09-12 22:56:10 +0000 UTC" firstStartedPulling="2025-09-12 22:56:53.548822777 +0000 UTC m=+62.573131236" lastFinishedPulling="2025-09-12 22:56:59.009011359 +0000 UTC m=+68.033319809" observedRunningTime="2025-09-12 22:57:00.155217784 +0000 UTC m=+69.179526233" watchObservedRunningTime="2025-09-12 22:57:00.158564028 +0000 UTC m=+69.182872477" Sep 12 22:57:00.165290 systemd-networkd[1508]: cali1acd618a3f6: Link UP Sep 12 22:57:00.177277 systemd-networkd[1508]: cali1acd618a3f6: Gained carrier Sep 12 22:57:00.185241 systemd[1]: Started cri-containerd-821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff.scope - libcontainer container 821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff. 
Sep 12 22:57:00.239270 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.463 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0 calico-kube-controllers-78498cbbc6- calico-system 58c292a3-07a9-46f0-b413-47faa698b42a 875 0 2025-09-12 22:56:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78498cbbc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78498cbbc6-zw9jr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1acd618a3f6 [] [] }} ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.463 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.555 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" HandleID="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Workload="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265017 containerd[1605]: 
2025-09-12 22:56:59.555 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" HandleID="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Workload="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00064a660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78498cbbc6-zw9jr", "timestamp":"2025-09-12 22:56:59.555218089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.555 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.860 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.860 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.886 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.911 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.970 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.975 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.990 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:56:59.990 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.000 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.033 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.103 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.103 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" host="localhost" Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.104 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:57:00.265017 containerd[1605]: 2025-09-12 22:57:00.104 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" HandleID="k8s-pod-network.a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Workload="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265876 containerd[1605]: 2025-09-12 22:57:00.150 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0", GenerateName:"calico-kube-controllers-78498cbbc6-", Namespace:"calico-system", SelfLink:"", UID:"58c292a3-07a9-46f0-b413-47faa698b42a", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78498cbbc6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78498cbbc6-zw9jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1acd618a3f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:00.265876 containerd[1605]: 2025-09-12 22:57:00.151 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265876 containerd[1605]: 2025-09-12 22:57:00.151 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1acd618a3f6 ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265876 containerd[1605]: 2025-09-12 22:57:00.178 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.265876 containerd[1605]: 
2025-09-12 22:57:00.184 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0", GenerateName:"calico-kube-controllers-78498cbbc6-", Namespace:"calico-system", SelfLink:"", UID:"58c292a3-07a9-46f0-b413-47faa698b42a", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78498cbbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e", Pod:"calico-kube-controllers-78498cbbc6-zw9jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1acd618a3f6", MAC:"1e:87:5a:c3:f3:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:57:00.265876 
containerd[1605]: 2025-09-12 22:57:00.236 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" Namespace="calico-system" Pod="calico-kube-controllers-78498cbbc6-zw9jr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78498cbbc6--zw9jr-eth0" Sep 12 22:57:00.331697 containerd[1605]: time="2025-09-12T22:57:00.329150830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d4j5k,Uid:48684744-98ac-48bc-8701-ed3f0eaabebd,Namespace:calico-system,Attempt:0,} returns sandbox id \"fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56\"" Sep 12 22:57:00.376160 containerd[1605]: time="2025-09-12T22:57:00.375875553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ffh8k,Uid:95981e6b-8b61-4ef7-8054-72e5fdafc228,Namespace:kube-system,Attempt:0,} returns sandbox id \"821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff\"" Sep 12 22:57:00.378711 kubelet[2862]: E0912 22:57:00.378655 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:00.397795 containerd[1605]: time="2025-09-12T22:57:00.395489991Z" level=info msg="CreateContainer within sandbox \"821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:57:00.424474 containerd[1605]: time="2025-09-12T22:57:00.423750660Z" level=info msg="connecting to shim a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e" address="unix:///run/containerd/s/247558a8f3cde8e069b2e3b26259ec5f65fcee94ab86bb57898ab7263a304a17" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:57:00.502441 systemd[1]: Started cri-containerd-a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e.scope - libcontainer container 
a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e. Sep 12 22:57:00.514644 containerd[1605]: time="2025-09-12T22:57:00.514570790Z" level=info msg="Container 6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:00.521272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314674290.mount: Deactivated successfully. Sep 12 22:57:00.530664 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:57:00.550239 containerd[1605]: time="2025-09-12T22:57:00.550100118Z" level=info msg="CreateContainer within sandbox \"821a6d6326ac20aba3ba9d6edfcf86900a5b0a1ab03bef823d7a78b11d33f0ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307\"" Sep 12 22:57:00.551800 containerd[1605]: time="2025-09-12T22:57:00.550985673Z" level=info msg="StartContainer for \"6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307\"" Sep 12 22:57:00.554681 containerd[1605]: time="2025-09-12T22:57:00.554454887Z" level=info msg="connecting to shim 6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307" address="unix:///run/containerd/s/f4ecece745be33b09b0fab2918f734fb4b03c11e5c3dbcbcc664f962057ce018" protocol=ttrpc version=3 Sep 12 22:57:00.590297 containerd[1605]: time="2025-09-12T22:57:00.589406660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78498cbbc6-zw9jr,Uid:58c292a3-07a9-46f0-b413-47faa698b42a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e\"" Sep 12 22:57:00.608252 systemd[1]: Started cri-containerd-6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307.scope - libcontainer container 6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307. 
Sep 12 22:57:00.731312 containerd[1605]: time="2025-09-12T22:57:00.731249775Z" level=info msg="StartContainer for \"6898fd161b94141505a17758ce599be845ad9bf9412bdcd13ea421d3f8398307\" returns successfully" Sep 12 22:57:01.002979 containerd[1605]: time="2025-09-12T22:57:01.001894702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:01.004924 containerd[1605]: time="2025-09-12T22:57:01.004599288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 22:57:01.011191 containerd[1605]: time="2025-09-12T22:57:01.009176356Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:01.027754 containerd[1605]: time="2025-09-12T22:57:01.025436774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:01.028805 containerd[1605]: time="2025-09-12T22:57:01.028756848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.017456222s" Sep 12 22:57:01.028888 containerd[1605]: time="2025-09-12T22:57:01.028805159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 22:57:01.032959 systemd-networkd[1508]: calieae2a558d10: Gained IPv6LL Sep 12 22:57:01.036322 containerd[1605]: 
time="2025-09-12T22:57:01.031299630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 22:57:01.053642 containerd[1605]: time="2025-09-12T22:57:01.053571614Z" level=info msg="CreateContainer within sandbox \"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 22:57:01.131089 kubelet[2862]: E0912 22:57:01.131040 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:01.136883 kubelet[2862]: I0912 22:57:01.136826 2862 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:01.369733 containerd[1605]: time="2025-09-12T22:57:01.366810176Z" level=info msg="Container f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:01.484628 kubelet[2862]: I0912 22:57:01.480566 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ffh8k" podStartSLOduration=65.480541855 podStartE2EDuration="1m5.480541855s" podCreationTimestamp="2025-09-12 22:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:57:01.47013211 +0000 UTC m=+70.494440559" watchObservedRunningTime="2025-09-12 22:57:01.480541855 +0000 UTC m=+70.504850304" Sep 12 22:57:01.532698 containerd[1605]: time="2025-09-12T22:57:01.532549669Z" level=info msg="CreateContainer within sandbox \"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903\"" Sep 12 22:57:01.536583 containerd[1605]: time="2025-09-12T22:57:01.536464992Z" level=info msg="StartContainer for 
\"f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903\"" Sep 12 22:57:01.547484 containerd[1605]: time="2025-09-12T22:57:01.547292935Z" level=info msg="connecting to shim f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903" address="unix:///run/containerd/s/28ce2e2b831ab76aae6c59efdd2437854aa5cea73a64a813bf00a3d86ffbf5a3" protocol=ttrpc version=3 Sep 12 22:57:01.621905 systemd[1]: Started cri-containerd-f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903.scope - libcontainer container f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903. Sep 12 22:57:01.673742 systemd-networkd[1508]: cali8aac7dc295a: Gained IPv6LL Sep 12 22:57:01.749856 containerd[1605]: time="2025-09-12T22:57:01.749726132Z" level=info msg="StartContainer for \"f42b99e7e029536fb79439c4e3c6768691cc2807c3aec5adef40164e811fa903\" returns successfully" Sep 12 22:57:02.050343 systemd-networkd[1508]: cali1acd618a3f6: Gained IPv6LL Sep 12 22:57:02.147237 kubelet[2862]: E0912 22:57:02.146173 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:02.338067 containerd[1605]: time="2025-09-12T22:57:02.337123768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,}" Sep 12 22:57:02.338969 containerd[1605]: time="2025-09-12T22:57:02.338814979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:57:03.170452 kubelet[2862]: E0912 22:57:03.170006 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:03.290688 systemd-networkd[1508]: calia2452751b64: Link UP Sep 12 
22:57:03.292984 systemd-networkd[1508]: calia2452751b64: Gained carrier Sep 12 22:57:03.334846 kubelet[2862]: E0912 22:57:03.333552 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:03.336115 containerd[1605]: time="2025-09-12T22:57:03.335312827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,}" Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.541 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ktqqr-eth0 csi-node-driver- calico-system e017806f-04da-4501-94aa-d21ceb92cfe6 728 0 2025-09-12 22:56:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ktqqr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2452751b64 [] [] }} ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-" Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.542 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0" Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.632 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" HandleID="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Workload="localhost-k8s-csi--node--driver--ktqqr-eth0" Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.632 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" HandleID="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Workload="localhost-k8s-csi--node--driver--ktqqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ktqqr", "timestamp":"2025-09-12 22:57:02.632694229 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.632 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.632 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.633 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.654 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.697 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.728 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.732 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.806 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.806 [INFO][5063] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:02.888 [INFO][5063] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:03.071 [INFO][5063] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:03.263 [INFO][5063] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:03.276 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" host="localhost"
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:03.276 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:57:03.443259 containerd[1605]: 2025-09-12 22:57:03.276 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" HandleID="k8s-pod-network.3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Workload="localhost-k8s-csi--node--driver--ktqqr-eth0"
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.285 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktqqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e017806f-04da-4501-94aa-d21ceb92cfe6", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ktqqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2452751b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.285 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0"
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.285 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2452751b64 ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0"
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.294 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0"
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.295 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktqqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e017806f-04da-4501-94aa-d21ceb92cfe6", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa", Pod:"csi-node-driver-ktqqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2452751b64", MAC:"42:0b:6a:2d:14:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:03.449807 containerd[1605]: 2025-09-12 22:57:03.418 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" Namespace="calico-system" Pod="csi-node-driver-ktqqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqqr-eth0"
Sep 12 22:57:04.164110 systemd-networkd[1508]: califc0380fa336: Link UP
Sep 12 22:57:04.166094 systemd-networkd[1508]: califc0380fa336: Gained carrier
Sep 12 22:57:04.186461 kubelet[2862]: E0912 22:57:04.186116 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:02.592 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0 calico-apiserver-5648cf76df- calico-apiserver 8f4a5b7b-14f1-40e2-b837-127fbfcb716e 867 0 2025-09-12 22:56:10 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5648cf76df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5648cf76df-8sfrx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc0380fa336 [] [] }} ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:02.593 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:02.675 [INFO][5070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" HandleID="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Workload="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:02.676 [INFO][5070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" HandleID="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Workload="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5648cf76df-8sfrx", "timestamp":"2025-09-12 22:57:02.675508682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:02.676 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.278 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.285 [INFO][5070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.435 [INFO][5070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.489 [INFO][5070] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.819 [INFO][5070] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.833 [INFO][5070] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.842 [INFO][5070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.842 [INFO][5070] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:03.847 [INFO][5070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:04.105 [INFO][5070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:04.138 [INFO][5070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:04.138 [INFO][5070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" host="localhost"
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:04.138 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:57:04.214244 containerd[1605]: 2025-09-12 22:57:04.138 [INFO][5070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" HandleID="k8s-pod-network.8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Workload="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.154 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0", GenerateName:"calico-apiserver-5648cf76df-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f4a5b7b-14f1-40e2-b837-127fbfcb716e", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5648cf76df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5648cf76df-8sfrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc0380fa336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.154 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.154 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc0380fa336 ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.165 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.168 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0", GenerateName:"calico-apiserver-5648cf76df-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f4a5b7b-14f1-40e2-b837-127fbfcb716e", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 56, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5648cf76df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5", Pod:"calico-apiserver-5648cf76df-8sfrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc0380fa336", MAC:"fa:c6:8f:db:4a:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:04.217281 containerd[1605]: 2025-09-12 22:57:04.205 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" Namespace="calico-apiserver" Pod="calico-apiserver-5648cf76df-8sfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5648cf76df--8sfrx-eth0"
Sep 12 22:57:04.360219 containerd[1605]: time="2025-09-12T22:57:04.360156535Z" level=info msg="connecting to shim 3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa" address="unix:///run/containerd/s/5e8ede404b385a77af52f2879a87d279c826672ed7a5d48f7d0b7a38684cf900" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:57:04.377209 containerd[1605]: time="2025-09-12T22:57:04.377126894Z" level=info msg="connecting to shim 8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5" address="unix:///run/containerd/s/4d6206a4e358dc8b5459ca851b3b05731c33505f57bbb26eccf891b432a7ea06" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:57:04.427608 systemd[1]: Started cri-containerd-3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa.scope - libcontainer container 3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa.
Sep 12 22:57:04.432026 systemd[1]: Started cri-containerd-8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5.scope - libcontainer container 8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5.
Sep 12 22:57:04.467095 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 22:57:04.475137 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 22:57:04.537432 containerd[1605]: time="2025-09-12T22:57:04.537350294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqqr,Uid:e017806f-04da-4501-94aa-d21ceb92cfe6,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa\""
Sep 12 22:57:04.572962 containerd[1605]: time="2025-09-12T22:57:04.572750057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5648cf76df-8sfrx,Uid:8f4a5b7b-14f1-40e2-b837-127fbfcb716e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5\""
Sep 12 22:57:04.579801 systemd-networkd[1508]: cali247faa65020: Link UP
Sep 12 22:57:04.585963 containerd[1605]: time="2025-09-12T22:57:04.585902568Z" level=info msg="CreateContainer within sandbox \"8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 22:57:04.592013 systemd-networkd[1508]: cali247faa65020: Gained carrier
Sep 12 22:57:04.616122 containerd[1605]: time="2025-09-12T22:57:04.615984491Z" level=info msg="Container cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:57:04.697738 containerd[1605]: time="2025-09-12T22:57:04.697022788Z" level=info msg="CreateContainer within sandbox \"8e7e6844969c485e8c5d2adcb4e51b3e66aa5748c78c11122cf6b39a1bf840b5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f\""
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.276 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0 coredns-674b8bbfcf- kube-system b43354cb-8c3f-4399-a45a-0f4a8dfe860f 868 0 2025-09-12 22:55:56 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-j7nvf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali247faa65020 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.276 [INFO][5098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.404 [INFO][5125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" HandleID="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Workload="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.405 [INFO][5125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" HandleID="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Workload="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-j7nvf", "timestamp":"2025-09-12 22:57:04.404159955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.406 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.406 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.406 [INFO][5125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.430 [INFO][5125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.458 [INFO][5125] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.478 [INFO][5125] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.491 [INFO][5125] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.514 [INFO][5125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.516 [INFO][5125] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.527 [INFO][5125] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.542 [INFO][5125] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.567 [INFO][5125] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.567 [INFO][5125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" host="localhost"
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.567 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:57:04.702851 containerd[1605]: 2025-09-12 22:57:04.567 [INFO][5125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" HandleID="k8s-pod-network.ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Workload="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.574 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b43354cb-8c3f-4399-a45a-0f4a8dfe860f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 55, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-j7nvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali247faa65020", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.575 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.575 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali247faa65020 ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.594 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.598 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b43354cb-8c3f-4399-a45a-0f4a8dfe860f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 55, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d", Pod:"coredns-674b8bbfcf-j7nvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali247faa65020", MAC:"1a:2f:8f:f5:7e:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:57:04.703710 containerd[1605]: 2025-09-12 22:57:04.649 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j7nvf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j7nvf-eth0"
Sep 12 22:57:04.703710 containerd[1605]: time="2025-09-12T22:57:04.702158334Z" level=info msg="StartContainer for \"cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f\""
Sep 12 22:57:04.703710 containerd[1605]: time="2025-09-12T22:57:04.704689804Z" level=info msg="connecting to shim cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f" address="unix:///run/containerd/s/4d6206a4e358dc8b5459ca851b3b05731c33505f57bbb26eccf891b432a7ea06" protocol=ttrpc version=3
Sep 12 22:57:04.774864 systemd[1]: Started cri-containerd-cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f.scope - libcontainer container cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f.
Sep 12 22:57:04.822740 containerd[1605]: time="2025-09-12T22:57:04.822627423Z" level=info msg="connecting to shim ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d" address="unix:///run/containerd/s/7ac706d3cb74fd6b35633ce0cc135323c2da7f63567afe23bc15f79ad74003b0" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:57:04.853607 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:41980.service - OpenSSH per-connection server daemon (10.0.0.1:41980).
Sep 12 22:57:04.872373 systemd[1]: Started cri-containerd-ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d.scope - libcontainer container ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d.
Sep 12 22:57:04.897198 containerd[1605]: time="2025-09-12T22:57:04.897132275Z" level=info msg="StartContainer for \"cb2dd0f51838535d238b8ea574576eae84e4053261fbd6f4f2bda2382cccaf5f\" returns successfully"
Sep 12 22:57:04.923661 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 22:57:05.006145 containerd[1605]: time="2025-09-12T22:57:05.005953403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j7nvf,Uid:b43354cb-8c3f-4399-a45a-0f4a8dfe860f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d\""
Sep 12 22:57:05.009083 kubelet[2862]: E0912 22:57:05.007624 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:57:05.013801 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 41980 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:05.016512 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:05.025445 containerd[1605]: time="2025-09-12T22:57:05.022657149Z" level=info msg="CreateContainer within sandbox \"ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 22:57:05.033918 systemd-logind[1583]: New session 10 of user core.
Sep 12 22:57:05.041647 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 22:57:05.059134 systemd-networkd[1508]: calia2452751b64: Gained IPv6LL Sep 12 22:57:05.102003 containerd[1605]: time="2025-09-12T22:57:05.101151156Z" level=info msg="Container b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:05.157239 containerd[1605]: time="2025-09-12T22:57:05.154149215Z" level=info msg="CreateContainer within sandbox \"ef6da8bc4729942d219ab0e312c39738693c6250549b29b842ee523aa82df38d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271\"" Sep 12 22:57:05.159844 containerd[1605]: time="2025-09-12T22:57:05.159778920Z" level=info msg="StartContainer for \"b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271\"" Sep 12 22:57:05.168271 containerd[1605]: time="2025-09-12T22:57:05.168192620Z" level=info msg="connecting to shim b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271" address="unix:///run/containerd/s/7ac706d3cb74fd6b35633ce0cc135323c2da7f63567afe23bc15f79ad74003b0" protocol=ttrpc version=3 Sep 12 22:57:05.322858 systemd-networkd[1508]: califc0380fa336: Gained IPv6LL Sep 12 22:57:05.332816 systemd[1]: Started cri-containerd-b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271.scope - libcontainer container b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271. Sep 12 22:57:05.600945 containerd[1605]: time="2025-09-12T22:57:05.600775787Z" level=info msg="StartContainer for \"b9ff8d5b8c16e90880fcccf70273cb7c092eaf9d35a9fb403f060f9b06522271\" returns successfully" Sep 12 22:57:05.605292 sshd[5325]: Connection closed by 10.0.0.1 port 41980 Sep 12 22:57:05.607643 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:05.624306 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:41980.service: Deactivated successfully. Sep 12 22:57:05.629191 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 12 22:57:05.634492 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Sep 12 22:57:05.641286 systemd-logind[1583]: Removed session 10. Sep 12 22:57:06.245726 kubelet[2862]: I0912 22:57:06.245635 2862 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:06.248337 kubelet[2862]: E0912 22:57:06.246814 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:06.276419 systemd-networkd[1508]: cali247faa65020: Gained IPv6LL Sep 12 22:57:06.315208 kubelet[2862]: I0912 22:57:06.315117 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j7nvf" podStartSLOduration=70.315095652 podStartE2EDuration="1m10.315095652s" podCreationTimestamp="2025-09-12 22:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:57:06.314879115 +0000 UTC m=+75.339187564" watchObservedRunningTime="2025-09-12 22:57:06.315095652 +0000 UTC m=+75.339404101" Sep 12 22:57:06.324133 kubelet[2862]: I0912 22:57:06.324026 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5648cf76df-8sfrx" podStartSLOduration=56.323980406 podStartE2EDuration="56.323980406s" podCreationTimestamp="2025-09-12 22:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:57:05.253691758 +0000 UTC m=+74.278000227" watchObservedRunningTime="2025-09-12 22:57:06.323980406 +0000 UTC m=+75.348288946" Sep 12 22:57:06.541826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680680638.mount: Deactivated successfully. 
Sep 12 22:57:07.245571 kubelet[2862]: E0912 22:57:07.245461 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:08.250516 kubelet[2862]: E0912 22:57:08.250376 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:09.294290 containerd[1605]: time="2025-09-12T22:57:09.294198452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:09.296891 containerd[1605]: time="2025-09-12T22:57:09.296835810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 22:57:09.312051 containerd[1605]: time="2025-09-12T22:57:09.302024044Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:09.312051 containerd[1605]: time="2025-09-12T22:57:09.304760368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:09.312051 containerd[1605]: time="2025-09-12T22:57:09.306098344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.270242155s" Sep 12 22:57:09.312051 containerd[1605]: time="2025-09-12T22:57:09.306158276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 22:57:09.316690 containerd[1605]: time="2025-09-12T22:57:09.316254437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 22:57:09.344217 containerd[1605]: time="2025-09-12T22:57:09.342202017Z" level=info msg="CreateContainer within sandbox \"fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 22:57:09.399661 containerd[1605]: time="2025-09-12T22:57:09.399588340Z" level=info msg="Container ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:09.419961 containerd[1605]: time="2025-09-12T22:57:09.419754420Z" level=info msg="CreateContainer within sandbox \"fed67ce14839568c374876bdb81fe6ee6c389bd3958b91ddb9ae5127deaeba56\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\"" Sep 12 22:57:09.421607 containerd[1605]: time="2025-09-12T22:57:09.421544466Z" level=info msg="StartContainer for \"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\"" Sep 12 22:57:09.424381 containerd[1605]: time="2025-09-12T22:57:09.423289798Z" level=info msg="connecting to shim ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483" address="unix:///run/containerd/s/ef8457a0bf98d8715490b5069fb369e1066a9cb6003f296c532ddd9079ff1ca2" protocol=ttrpc version=3 Sep 12 22:57:09.611817 systemd[1]: Started cri-containerd-ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483.scope - libcontainer container ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483. Sep 12 22:57:10.626461 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:44240.service - OpenSSH per-connection server daemon (10.0.0.1:44240).
Sep 12 22:57:10.721484 containerd[1605]: time="2025-09-12T22:57:10.721035265Z" level=info msg="StartContainer for \"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" returns successfully" Sep 12 22:57:10.950573 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 44240 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 22:57:10.956942 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:10.971687 systemd-logind[1583]: New session 11 of user core. Sep 12 22:57:11.007838 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 22:57:11.362519 sshd[5447]: Connection closed by 10.0.0.1 port 44240 Sep 12 22:57:11.366757 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:11.375256 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:44240.service: Deactivated successfully. Sep 12 22:57:11.377906 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 22:57:11.379620 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit. Sep 12 22:57:11.382184 systemd-logind[1583]: Removed session 11. 
Sep 12 22:57:11.761894 kubelet[2862]: I0912 22:57:11.760654 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-d4j5k" podStartSLOduration=47.790849244 podStartE2EDuration="56.76063383s" podCreationTimestamp="2025-09-12 22:56:15 +0000 UTC" firstStartedPulling="2025-09-12 22:57:00.34557587 +0000 UTC m=+69.369884319" lastFinishedPulling="2025-09-12 22:57:09.315360456 +0000 UTC m=+78.339668905" observedRunningTime="2025-09-12 22:57:11.760454122 +0000 UTC m=+80.784762572" watchObservedRunningTime="2025-09-12 22:57:11.76063383 +0000 UTC m=+80.784942279" Sep 12 22:57:11.924109 containerd[1605]: time="2025-09-12T22:57:11.923936229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" id:\"59131f74b7a5321aae27659611d189aff0230a027c6e52b523f3e0b4a37b0474\" pid:5475 exit_status:1 exited_at:{seconds:1757717831 nanos:915168888}" Sep 12 22:57:12.943460 containerd[1605]: time="2025-09-12T22:57:12.943372878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" id:\"27b0741d35ce7df125caaed27d12577e486f632cf546e377a4a895747342ce45\" pid:5501 exit_status:1 exited_at:{seconds:1757717832 nanos:942295513}" Sep 12 22:57:15.351197 kubelet[2862]: E0912 22:57:15.350661 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:15.974358 containerd[1605]: time="2025-09-12T22:57:15.971381596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:15.978024 containerd[1605]: time="2025-09-12T22:57:15.977592791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" 
Sep 12 22:57:15.980607 containerd[1605]: time="2025-09-12T22:57:15.980338762Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:15.986347 containerd[1605]: time="2025-09-12T22:57:15.984933148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:15.986347 containerd[1605]: time="2025-09-12T22:57:15.986013108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.669628758s" Sep 12 22:57:15.986347 containerd[1605]: time="2025-09-12T22:57:15.986044417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 22:57:15.990632 containerd[1605]: time="2025-09-12T22:57:15.990562780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 22:57:16.035245 containerd[1605]: time="2025-09-12T22:57:16.035185279Z" level=info msg="CreateContainer within sandbox \"a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 22:57:16.419150 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:44256.service - OpenSSH per-connection server daemon (10.0.0.1:44256). 
Sep 12 22:57:16.495772 containerd[1605]: time="2025-09-12T22:57:16.491275796Z" level=info msg="Container 990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:16.576431 containerd[1605]: time="2025-09-12T22:57:16.575058097Z" level=info msg="CreateContainer within sandbox \"a9a130da7565186d5feb06f578908b63a694bf85c47730973a20968451789c5e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\"" Sep 12 22:57:16.576431 containerd[1605]: time="2025-09-12T22:57:16.576082091Z" level=info msg="StartContainer for \"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\"" Sep 12 22:57:16.585429 containerd[1605]: time="2025-09-12T22:57:16.581497670Z" level=info msg="connecting to shim 990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060" address="unix:///run/containerd/s/247558a8f3cde8e069b2e3b26259ec5f65fcee94ab86bb57898ab7263a304a17" protocol=ttrpc version=3 Sep 12 22:57:16.687928 systemd[1]: Started cri-containerd-990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060.scope - libcontainer container 990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060. Sep 12 22:57:16.815481 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 44256 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 22:57:16.819597 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:16.835533 systemd-logind[1583]: New session 12 of user core. Sep 12 22:57:16.862994 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 22:57:16.996431 containerd[1605]: time="2025-09-12T22:57:16.993563370Z" level=info msg="StartContainer for \"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\" returns successfully" Sep 12 22:57:17.309685 sshd[5553]: Connection closed by 10.0.0.1 port 44256 Sep 12 22:57:17.310600 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:17.330615 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:44256.service: Deactivated successfully. Sep 12 22:57:17.342550 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 22:57:17.345750 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Sep 12 22:57:17.347458 systemd-logind[1583]: Removed session 12. Sep 12 22:57:17.825521 kubelet[2862]: I0912 22:57:17.824063 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78498cbbc6-zw9jr" podStartSLOduration=47.428232037 podStartE2EDuration="1m2.824043159s" podCreationTimestamp="2025-09-12 22:56:15 +0000 UTC" firstStartedPulling="2025-09-12 22:57:00.591495187 +0000 UTC m=+69.615803636" lastFinishedPulling="2025-09-12 22:57:15.987306308 +0000 UTC m=+85.011614758" observedRunningTime="2025-09-12 22:57:17.823553759 +0000 UTC m=+86.847862208" watchObservedRunningTime="2025-09-12 22:57:17.824043159 +0000 UTC m=+86.848351608" Sep 12 22:57:17.945024 containerd[1605]: time="2025-09-12T22:57:17.944959448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\" id:\"0294ecbc0a692081e99244c121a58cbc43b7828702b9ebf44058d0c2605da783\" pid:5601 exited_at:{seconds:1757717837 nanos:933872470}" Sep 12 22:57:20.979929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057299295.mount: Deactivated successfully. 
Sep 12 22:57:21.219812 containerd[1605]: time="2025-09-12T22:57:21.219715196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:21.224579 containerd[1605]: time="2025-09-12T22:57:21.224487054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 22:57:21.227308 containerd[1605]: time="2025-09-12T22:57:21.226740138Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:21.230705 containerd[1605]: time="2025-09-12T22:57:21.230607817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:21.231494 containerd[1605]: time="2025-09-12T22:57:21.231329523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.240695208s" Sep 12 22:57:21.231494 containerd[1605]: time="2025-09-12T22:57:21.231384135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 22:57:21.235481 containerd[1605]: time="2025-09-12T22:57:21.233820394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 22:57:21.260322 containerd[1605]: time="2025-09-12T22:57:21.259259311Z" level=info msg="CreateContainer within sandbox \"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 22:57:21.338316 kubelet[2862]: E0912 22:57:21.334080 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:21.390431 containerd[1605]: time="2025-09-12T22:57:21.386599339Z" level=info msg="Container 3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:21.431639 containerd[1605]: time="2025-09-12T22:57:21.431483789Z" level=info msg="CreateContainer within sandbox \"6468b71e913ada32cd717a21df7b86e2c5c29e2af9d4c071c9311fdfdf9546e1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74\"" Sep 12 22:57:21.432728 containerd[1605]: time="2025-09-12T22:57:21.432700706Z" level=info msg="StartContainer for \"3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74\"" Sep 12 22:57:21.435077 containerd[1605]: time="2025-09-12T22:57:21.434651682Z" level=info msg="connecting to shim 3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74" address="unix:///run/containerd/s/28ce2e2b831ab76aae6c59efdd2437854aa5cea73a64a813bf00a3d86ffbf5a3" protocol=ttrpc version=3 Sep 12 22:57:21.468655 systemd[1]: Started cri-containerd-3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74.scope - libcontainer container 3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74. Sep 12 22:57:21.945791 containerd[1605]: time="2025-09-12T22:57:21.945020460Z" level=info msg="StartContainer for \"3877f11fd3dc3b8904105c08ffe2598f7881b81497b2196f7c44d2f1c3d82e74\" returns successfully" Sep 12 22:57:22.368465 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:34936.service - OpenSSH per-connection server daemon (10.0.0.1:34936).
Sep 12 22:57:22.596111 containerd[1605]: time="2025-09-12T22:57:22.595880965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" id:\"85658aed2c53121eb3eddb344891193422e4c1d5d28226e1d597647601e6bcac\" pid:5664 exited_at:{seconds:1757717842 nanos:595462298}" Sep 12 22:57:22.658736 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 34936 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 22:57:22.664730 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:22.678588 systemd-logind[1583]: New session 13 of user core. Sep 12 22:57:22.710124 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 22:57:23.000186 kubelet[2862]: I0912 22:57:22.997968 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-748d6f8c75-ktlbs" podStartSLOduration=4.314057478 podStartE2EDuration="31.997947102s" podCreationTimestamp="2025-09-12 22:56:51 +0000 UTC" firstStartedPulling="2025-09-12 22:56:53.548870497 +0000 UTC m=+62.573178946" lastFinishedPulling="2025-09-12 22:57:21.232760121 +0000 UTC m=+90.257068570" observedRunningTime="2025-09-12 22:57:22.997578009 +0000 UTC m=+92.021886488" watchObservedRunningTime="2025-09-12 22:57:22.997947102 +0000 UTC m=+92.022255562" Sep 12 22:57:23.334489 kubelet[2862]: E0912 22:57:23.331648 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:23.413292 sshd[5680]: Connection closed by 10.0.0.1 port 34936 Sep 12 22:57:23.416170 sshd-session[5665]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:23.424088 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:34936.service: Deactivated successfully. Sep 12 22:57:23.433781 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 12 22:57:23.437463 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Sep 12 22:57:23.440026 systemd-logind[1583]: Removed session 13. Sep 12 22:57:23.894303 containerd[1605]: time="2025-09-12T22:57:23.894168489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:23.903891 containerd[1605]: time="2025-09-12T22:57:23.903733974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 22:57:23.911763 containerd[1605]: time="2025-09-12T22:57:23.910572124Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:23.915803 containerd[1605]: time="2025-09-12T22:57:23.915519060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:23.919664 containerd[1605]: time="2025-09-12T22:57:23.917461761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.683606221s" Sep 12 22:57:23.919664 containerd[1605]: time="2025-09-12T22:57:23.917518648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 22:57:23.940459 containerd[1605]: time="2025-09-12T22:57:23.934237996Z" level=info msg="CreateContainer within sandbox \"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 22:57:24.020578 containerd[1605]: time="2025-09-12T22:57:24.014533546Z" level=info msg="Container 3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:24.177056 containerd[1605]: time="2025-09-12T22:57:24.176864011Z" level=info msg="CreateContainer within sandbox \"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945\"" Sep 12 22:57:24.181218 containerd[1605]: time="2025-09-12T22:57:24.181021183Z" level=info msg="StartContainer for \"3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945\"" Sep 12 22:57:24.183451 containerd[1605]: time="2025-09-12T22:57:24.182972730Z" level=info msg="connecting to shim 3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945" address="unix:///run/containerd/s/5e8ede404b385a77af52f2879a87d279c826672ed7a5d48f7d0b7a38684cf900" protocol=ttrpc version=3 Sep 12 22:57:24.243731 systemd[1]: Started cri-containerd-3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945.scope - libcontainer container 3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945.
Sep 12 22:57:24.376238 containerd[1605]: time="2025-09-12T22:57:24.376099796Z" level=info msg="StartContainer for \"3ecbc92e11bd187e89a5ad3c8d11954db8512d5b4bb79af8a22b6224ec0e6945\" returns successfully" Sep 12 22:57:24.408495 containerd[1605]: time="2025-09-12T22:57:24.408439166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 22:57:24.451235 kubelet[2862]: I0912 22:57:24.449595 2862 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:26.332580 kubelet[2862]: E0912 22:57:26.332527 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:57:28.339265 kubelet[2862]: I0912 22:57:28.335462 2862 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:57:28.440842 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:34948.service - OpenSSH per-connection server daemon (10.0.0.1:34948). Sep 12 22:57:28.658627 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 22:57:28.672218 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:28.699446 systemd-logind[1583]: New session 14 of user core. Sep 12 22:57:28.721731 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 22:57:29.375020 sshd[5746]: Connection closed by 10.0.0.1 port 34948 Sep 12 22:57:29.379238 sshd-session[5737]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:29.403176 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:34948.service: Deactivated successfully. Sep 12 22:57:29.410949 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 22:57:29.417387 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Sep 12 22:57:29.421980 systemd-logind[1583]: Removed session 14. 
Sep 12 22:57:29.685841 containerd[1605]: time="2025-09-12T22:57:29.682783372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.689507 containerd[1605]: time="2025-09-12T22:57:29.689358887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 22:57:29.694140 containerd[1605]: time="2025-09-12T22:57:29.692723689Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.697460 containerd[1605]: time="2025-09-12T22:57:29.697334052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:57:29.698148 containerd[1605]: time="2025-09-12T22:57:29.698101744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 5.289393513s" Sep 12 22:57:29.698148 containerd[1605]: time="2025-09-12T22:57:29.698146639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 22:57:29.716605 containerd[1605]: time="2025-09-12T22:57:29.714341528Z" level=info msg="CreateContainer within sandbox \"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 22:57:29.758496 containerd[1605]: time="2025-09-12T22:57:29.758197151Z" level=info msg="Container 4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:57:29.795091 containerd[1605]: time="2025-09-12T22:57:29.792859351Z" level=info msg="CreateContainer within sandbox \"3c8b783eed992ff8000f38f38cebda6597416b9d07ae13d69bbc26d32d1ab8fa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa\"" Sep 12 22:57:29.795373 containerd[1605]: time="2025-09-12T22:57:29.795332698Z" level=info msg="StartContainer for \"4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa\"" Sep 12 22:57:29.797753 containerd[1605]: time="2025-09-12T22:57:29.797705275Z" level=info msg="connecting to shim 4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa" address="unix:///run/containerd/s/5e8ede404b385a77af52f2879a87d279c826672ed7a5d48f7d0b7a38684cf900" protocol=ttrpc version=3 Sep 12 22:57:29.860361 systemd[1]: Started cri-containerd-4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa.scope - libcontainer container 4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa.
Sep 12 22:57:29.986063 containerd[1605]: time="2025-09-12T22:57:29.985981509Z" level=info msg="StartContainer for \"4280164039a79c00151c61747e23e3f605c2ee3a2b36715765ae5dd29b6c9afa\" returns successfully" Sep 12 22:57:30.683477 kubelet[2862]: I0912 22:57:30.683206 2862 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 22:57:30.718053 kubelet[2862]: I0912 22:57:30.716579 2862 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 22:57:34.427212 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:56706.service - OpenSSH per-connection server daemon (10.0.0.1:56706). Sep 12 22:57:34.787906 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 56706 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 22:57:34.795817 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:57:34.817756 systemd-logind[1583]: New session 15 of user core. Sep 12 22:57:34.838694 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 22:57:35.862653 sshd[5799]: Connection closed by 10.0.0.1 port 56706 Sep 12 22:57:35.863499 sshd-session[5795]: pam_unix(sshd:session): session closed for user core Sep 12 22:57:35.877680 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:56706.service: Deactivated successfully. Sep 12 22:57:35.897095 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 22:57:35.907113 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Sep 12 22:57:35.912740 systemd-logind[1583]: Removed session 15. Sep 12 22:57:40.885265 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:48394.service - OpenSSH per-connection server daemon (10.0.0.1:48394). 
Sep 12 22:57:41.017009 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 48394 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:41.016620 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:41.032912 systemd-logind[1583]: New session 16 of user core.
Sep 12 22:57:41.049762 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 22:57:41.391928 sshd[5844]: Connection closed by 10.0.0.1 port 48394
Sep 12 22:57:41.394369 sshd-session[5841]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:41.420585 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:48394.service: Deactivated successfully.
Sep 12 22:57:41.432033 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 22:57:41.443737 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:48398.service - OpenSSH per-connection server daemon (10.0.0.1:48398).
Sep 12 22:57:41.444881 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit.
Sep 12 22:57:41.453641 systemd-logind[1583]: Removed session 16.
Sep 12 22:57:41.579659 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 48398 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:41.590637 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:41.624201 systemd-logind[1583]: New session 17 of user core.
Sep 12 22:57:41.645448 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 22:57:42.109172 sshd[5861]: Connection closed by 10.0.0.1 port 48398
Sep 12 22:57:42.115936 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:42.175060 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:48398.service: Deactivated successfully.
Sep 12 22:57:42.189943 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 22:57:42.211767 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit.
Sep 12 22:57:42.215629 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:48400.service - OpenSSH per-connection server daemon (10.0.0.1:48400).
Sep 12 22:57:42.220131 systemd-logind[1583]: Removed session 17.
Sep 12 22:57:42.428117 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 48400 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:42.433268 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:42.461831 systemd-logind[1583]: New session 18 of user core.
Sep 12 22:57:42.493707 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 22:57:42.889731 sshd[5877]: Connection closed by 10.0.0.1 port 48400
Sep 12 22:57:42.887676 sshd-session[5874]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:42.904373 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:48400.service: Deactivated successfully.
Sep 12 22:57:42.916272 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 22:57:42.928258 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit.
Sep 12 22:57:42.935379 systemd-logind[1583]: Removed session 18.
Sep 12 22:57:43.073878 containerd[1605]: time="2025-09-12T22:57:43.071308253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" id:\"177db52eea6b132a73dc3f86d854040b167a3bb375bb9494eeeee868aeb9e00c\" pid:5901 exited_at:{seconds:1757717863 nanos:70328452}"
Sep 12 22:57:43.181944 kubelet[2862]: I0912 22:57:43.181745 2862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ktqqr" podStartSLOduration=63.018598598 podStartE2EDuration="1m28.181724757s" podCreationTimestamp="2025-09-12 22:56:15 +0000 UTC" firstStartedPulling="2025-09-12 22:57:04.539682149 +0000 UTC m=+73.563990598" lastFinishedPulling="2025-09-12 22:57:29.702808298 +0000 UTC m=+98.727116757" observedRunningTime="2025-09-12 22:57:30.056429727 +0000 UTC m=+99.080738186" watchObservedRunningTime="2025-09-12 22:57:43.181724757 +0000 UTC m=+112.206033206"
Sep 12 22:57:47.918948 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410).
Sep 12 22:57:47.959677 containerd[1605]: time="2025-09-12T22:57:47.955838203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\" id:\"8ba4971a9113f79ce18fc7ca39b9c96da8582aad3c315533cc1f4302d8cf60bc\" pid:5933 exited_at:{seconds:1757717867 nanos:955125264}"
Sep 12 22:57:48.056348 sshd[5940]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:48.062229 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:48.077206 systemd-logind[1583]: New session 19 of user core.
Sep 12 22:57:48.089766 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 22:57:48.420203 sshd[5947]: Connection closed by 10.0.0.1 port 48410
Sep 12 22:57:48.422316 sshd-session[5940]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:48.440528 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:48410.service: Deactivated successfully.
Sep 12 22:57:48.447611 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 22:57:48.461258 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit.
Sep 12 22:57:48.470539 systemd-logind[1583]: Removed session 19.
Sep 12 22:57:49.334105 kubelet[2862]: E0912 22:57:49.333567 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:57:52.506625 containerd[1605]: time="2025-09-12T22:57:52.506271210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" id:\"17760034ba4ad16c24c6782bd383b36e9dcfce28686f7273a02369a0595f9057\" pid:5974 exited_at:{seconds:1757717872 nanos:505664550}"
Sep 12 22:57:53.463788 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:56348.service - OpenSSH per-connection server daemon (10.0.0.1:56348).
Sep 12 22:57:53.718997 sshd[5988]: Accepted publickey for core from 10.0.0.1 port 56348 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:53.722304 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:53.743466 systemd-logind[1583]: New session 20 of user core.
Sep 12 22:57:53.758797 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 22:57:54.111680 sshd[5991]: Connection closed by 10.0.0.1 port 56348
Sep 12 22:57:54.110583 sshd-session[5988]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:54.120827 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:56348.service: Deactivated successfully.
Sep 12 22:57:54.127800 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 22:57:54.134031 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit.
Sep 12 22:57:54.137736 systemd-logind[1583]: Removed session 20.
Sep 12 22:57:56.549188 containerd[1605]: time="2025-09-12T22:57:56.549130111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" id:\"4b629ae18b7c2ceef77c0209aeb482aff7f035e26dc760723c797ff8438ab8b1\" pid:6015 exited_at:{seconds:1757717876 nanos:548750037}"
Sep 12 22:57:59.141778 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:56356.service - OpenSSH per-connection server daemon (10.0.0.1:56356).
Sep 12 22:57:59.242601 sshd[6033]: Accepted publickey for core from 10.0.0.1 port 56356 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:57:59.245582 sshd-session[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:57:59.262973 systemd-logind[1583]: New session 21 of user core.
Sep 12 22:57:59.288586 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 22:57:59.645108 sshd[6036]: Connection closed by 10.0.0.1 port 56356
Sep 12 22:57:59.646039 sshd-session[6033]: pam_unix(sshd:session): session closed for user core
Sep 12 22:57:59.666933 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:56356.service: Deactivated successfully.
Sep 12 22:57:59.672385 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 22:57:59.674464 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit.
Sep 12 22:57:59.677766 systemd-logind[1583]: Removed session 21.
Sep 12 22:58:04.693661 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:50566.service - OpenSSH per-connection server daemon (10.0.0.1:50566).
Sep 12 22:58:04.842595 sshd[6053]: Accepted publickey for core from 10.0.0.1 port 50566 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:04.851140 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:04.875955 systemd-logind[1583]: New session 22 of user core.
Sep 12 22:58:04.886929 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 22:58:05.139126 sshd[6056]: Connection closed by 10.0.0.1 port 50566
Sep 12 22:58:05.140650 sshd-session[6053]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:05.154342 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:50566.service: Deactivated successfully.
Sep 12 22:58:05.160761 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 22:58:05.167904 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit.
Sep 12 22:58:05.174679 systemd-logind[1583]: Removed session 22.
Sep 12 22:58:10.161632 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630).
Sep 12 22:58:10.200338 containerd[1605]: time="2025-09-12T22:58:10.200284099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\" id:\"6a194b40edff83951aed0c0f5784e538da4232fd7334c63840350f0b985fa223\" pid:6080 exited_at:{seconds:1757717890 nanos:199440715}"
Sep 12 22:58:10.313213 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:10.316132 sshd-session[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:10.357916 systemd-logind[1583]: New session 23 of user core.
Sep 12 22:58:10.378923 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 22:58:10.794089 sshd[6096]: Connection closed by 10.0.0.1 port 49630
Sep 12 22:58:10.797284 sshd-session[6087]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:10.811906 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:49630.service: Deactivated successfully.
Sep 12 22:58:10.819455 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 22:58:10.823452 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit.
Sep 12 22:58:10.832714 systemd-logind[1583]: Removed session 23.
Sep 12 22:58:13.094213 containerd[1605]: time="2025-09-12T22:58:13.092529735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec8dff586d999a23f7bfe750f780f9a201c76e0faf7f64004c104e6288345483\" id:\"50664586c8cd4686956177a39a7beb41640904119797231af54cf13e7fe3d84e\" pid:6120 exited_at:{seconds:1757717893 nanos:91742330}"
Sep 12 22:58:15.850588 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:49640.service - OpenSSH per-connection server daemon (10.0.0.1:49640).
Sep 12 22:58:15.994320 sshd[6139]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:16.005068 sshd-session[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:16.022919 systemd-logind[1583]: New session 24 of user core.
Sep 12 22:58:16.037360 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 22:58:16.421968 sshd[6142]: Connection closed by 10.0.0.1 port 49640
Sep 12 22:58:16.423753 sshd-session[6139]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:16.459107 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:49640.service: Deactivated successfully.
Sep 12 22:58:16.464771 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 22:58:16.469825 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit.
Sep 12 22:58:16.482372 systemd-logind[1583]: Removed session 24.
Sep 12 22:58:17.875944 containerd[1605]: time="2025-09-12T22:58:17.875895672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"990fb6fae149fcfc5e27402f1e0b8dfa174d2912f7a4ec0e0069e5d938a0e060\" id:\"af6b79a2bb3f0bfa8f2e8c9b0e3fb0926f508b14428942833c600362dfe89b9c\" pid:6167 exited_at:{seconds:1757717897 nanos:875670756}"
Sep 12 22:58:20.331057 kubelet[2862]: E0912 22:58:20.330999 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:58:21.435284 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:33392.service - OpenSSH per-connection server daemon (10.0.0.1:33392).
Sep 12 22:58:21.527489 sshd[6177]: Accepted publickey for core from 10.0.0.1 port 33392 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:21.529880 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:21.535789 systemd-logind[1583]: New session 25 of user core.
Sep 12 22:58:21.546662 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 22:58:21.825574 sshd[6180]: Connection closed by 10.0.0.1 port 33392
Sep 12 22:58:21.826139 sshd-session[6177]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:21.835774 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:33392.service: Deactivated successfully.
Sep 12 22:58:21.837922 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 22:58:21.838924 systemd-logind[1583]: Session 25 logged out. Waiting for processes to exit.
Sep 12 22:58:21.842411 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402).
Sep 12 22:58:21.843197 systemd-logind[1583]: Removed session 25.
Sep 12 22:58:21.905429 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:21.907158 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:21.912079 systemd-logind[1583]: New session 26 of user core.
Sep 12 22:58:21.922229 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 22:58:22.265692 sshd[6197]: Connection closed by 10.0.0.1 port 33402
Sep 12 22:58:22.266151 sshd-session[6194]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:22.278458 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:33402.service: Deactivated successfully.
Sep 12 22:58:22.281514 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 22:58:22.282873 systemd-logind[1583]: Session 26 logged out. Waiting for processes to exit.
Sep 12 22:58:22.286846 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:33412.service - OpenSSH per-connection server daemon (10.0.0.1:33412).
Sep 12 22:58:22.289326 systemd-logind[1583]: Removed session 26.
Sep 12 22:58:22.365808 containerd[1605]: time="2025-09-12T22:58:22.365737103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2421b0bee19e9553ca6858f13f78a2ba6ba6575581be1c42c1089dbd8c821793\" id:\"2899ee72a927c3bb1151d5f9f9d410206269de550140688a79eadd76312e22db\" pid:6221 exited_at:{seconds:1757717902 nanos:365233929}"
Sep 12 22:58:22.368200 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 33412 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:22.370385 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:22.376835 systemd-logind[1583]: New session 27 of user core.
Sep 12 22:58:22.384593 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 22:58:22.923878 sshd[6240]: Connection closed by 10.0.0.1 port 33412
Sep 12 22:58:22.925199 sshd-session[6222]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:22.935762 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:33412.service: Deactivated successfully.
Sep 12 22:58:22.939193 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 22:58:22.940575 systemd-logind[1583]: Session 27 logged out. Waiting for processes to exit.
Sep 12 22:58:22.950027 systemd[1]: Started sshd@27-10.0.0.51:22-10.0.0.1:33420.service - OpenSSH per-connection server daemon (10.0.0.1:33420).
Sep 12 22:58:22.952410 systemd-logind[1583]: Removed session 27.
Sep 12 22:58:23.010637 sshd[6261]: Accepted publickey for core from 10.0.0.1 port 33420 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:23.012626 sshd-session[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:23.018524 systemd-logind[1583]: New session 28 of user core.
Sep 12 22:58:23.024615 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 22:58:23.388822 sshd[6264]: Connection closed by 10.0.0.1 port 33420
Sep 12 22:58:23.389426 sshd-session[6261]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:23.404993 systemd[1]: sshd@27-10.0.0.51:22-10.0.0.1:33420.service: Deactivated successfully.
Sep 12 22:58:23.407485 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 22:58:23.409619 systemd-logind[1583]: Session 28 logged out. Waiting for processes to exit.
Sep 12 22:58:23.413198 systemd[1]: Started sshd@28-10.0.0.51:22-10.0.0.1:33432.service - OpenSSH per-connection server daemon (10.0.0.1:33432).
Sep 12 22:58:23.414099 systemd-logind[1583]: Removed session 28.
Sep 12 22:58:23.473868 sshd[6276]: Accepted publickey for core from 10.0.0.1 port 33432 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:23.475229 sshd-session[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:23.482174 systemd-logind[1583]: New session 29 of user core.
Sep 12 22:58:23.493520 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 22:58:23.612562 sshd[6279]: Connection closed by 10.0.0.1 port 33432
Sep 12 22:58:23.612998 sshd-session[6276]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:23.618317 systemd[1]: sshd@28-10.0.0.51:22-10.0.0.1:33432.service: Deactivated successfully.
Sep 12 22:58:23.620551 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 22:58:23.621306 systemd-logind[1583]: Session 29 logged out. Waiting for processes to exit.
Sep 12 22:58:23.622634 systemd-logind[1583]: Removed session 29.
Sep 12 22:58:24.330807 kubelet[2862]: E0912 22:58:24.330758 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:58:28.651873 systemd[1]: Started sshd@29-10.0.0.51:22-10.0.0.1:33446.service - OpenSSH per-connection server daemon (10.0.0.1:33446).
Sep 12 22:58:28.749252 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 33446 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:28.753057 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:28.762369 systemd-logind[1583]: New session 30 of user core.
Sep 12 22:58:28.771328 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 12 22:58:29.029530 sshd[6307]: Connection closed by 10.0.0.1 port 33446
Sep 12 22:58:29.032168 sshd-session[6304]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:29.046230 systemd-logind[1583]: Session 30 logged out. Waiting for processes to exit.
Sep 12 22:58:29.046857 systemd[1]: sshd@29-10.0.0.51:22-10.0.0.1:33446.service: Deactivated successfully.
Sep 12 22:58:29.057361 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 22:58:29.067266 systemd-logind[1583]: Removed session 30.
Sep 12 22:58:32.333487 kubelet[2862]: E0912 22:58:32.330712 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:58:34.080626 systemd[1]: Started sshd@30-10.0.0.51:22-10.0.0.1:33092.service - OpenSSH per-connection server daemon (10.0.0.1:33092).
Sep 12 22:58:34.202869 sshd[6337]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:34.207775 sshd-session[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:34.220598 systemd-logind[1583]: New session 31 of user core.
Sep 12 22:58:34.233715 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 12 22:58:34.590689 sshd[6340]: Connection closed by 10.0.0.1 port 33092
Sep 12 22:58:34.588723 sshd-session[6337]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:34.610904 systemd-logind[1583]: Session 31 logged out. Waiting for processes to exit.
Sep 12 22:58:34.611934 systemd[1]: sshd@30-10.0.0.51:22-10.0.0.1:33092.service: Deactivated successfully.
Sep 12 22:58:34.621609 systemd[1]: session-31.scope: Deactivated successfully.
Sep 12 22:58:34.631103 systemd-logind[1583]: Removed session 31.
Sep 12 22:58:35.332439 kubelet[2862]: E0912 22:58:35.331991 2862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:58:39.633297 systemd[1]: Started sshd@31-10.0.0.51:22-10.0.0.1:33096.service - OpenSSH per-connection server daemon (10.0.0.1:33096).
Sep 12 22:58:39.834276 sshd[6353]: Accepted publickey for core from 10.0.0.1 port 33096 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 22:58:39.837669 sshd-session[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:58:39.854830 systemd-logind[1583]: New session 32 of user core.
Sep 12 22:58:39.872729 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 12 22:58:40.166224 sshd[6356]: Connection closed by 10.0.0.1 port 33096
Sep 12 22:58:40.166639 sshd-session[6353]: pam_unix(sshd:session): session closed for user core
Sep 12 22:58:40.174877 systemd[1]: sshd@31-10.0.0.51:22-10.0.0.1:33096.service: Deactivated successfully.
Sep 12 22:58:40.178083 systemd[1]: session-32.scope: Deactivated successfully.
Sep 12 22:58:40.179845 systemd-logind[1583]: Session 32 logged out. Waiting for processes to exit.
Sep 12 22:58:40.181639 systemd-logind[1583]: Removed session 32.