May 13 23:57:02.969112 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:57:02.969143 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:02.969158 kernel: BIOS-provided physical RAM map:
May 13 23:57:02.969167 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 23:57:02.969176 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 23:57:02.969185 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 23:57:02.969196 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 23:57:02.969205 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 23:57:02.969214 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 23:57:02.969223 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 23:57:02.969236 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:57:02.969245 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 23:57:02.969258 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:57:02.969268 kernel: NX (Execute Disable) protection: active
May 13 23:57:02.969279 kernel: APIC: Static calls initialized
May 13 23:57:02.969292 kernel: SMBIOS 2.8 present.
May 13 23:57:02.969315 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 23:57:02.969325 kernel: Hypervisor detected: KVM
May 13 23:57:02.969335 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:57:02.969355 kernel: kvm-clock: using sched offset of 3205277294 cycles
May 13 23:57:02.969382 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:57:02.969402 kernel: tsc: Detected 2794.748 MHz processor
May 13 23:57:02.969422 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:57:02.969442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:57:02.969469 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 23:57:02.969502 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 23:57:02.969521 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:57:02.969541 kernel: Using GB pages for direct mapping
May 13 23:57:02.969552 kernel: ACPI: Early table checksum verification disabled
May 13 23:57:02.969562 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 23:57:02.969572 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969582 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969592 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969602 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 23:57:02.969616 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969627 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969637 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969647 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:57:02.969657 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 23:57:02.969683 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 23:57:02.969699 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 23:57:02.969713 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 23:57:02.969733 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 23:57:02.969743 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 23:57:02.969754 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 23:57:02.969764 kernel: No NUMA configuration found
May 13 23:57:02.969775 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 23:57:02.969785 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 23:57:02.969799 kernel: Zone ranges:
May 13 23:57:02.969809 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:57:02.969820 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 23:57:02.969830 kernel: Normal empty
May 13 23:57:02.969840 kernel: Movable zone start for each node
May 13 23:57:02.969850 kernel: Early memory node ranges
May 13 23:57:02.969860 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 23:57:02.969870 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 23:57:02.969880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 23:57:02.969894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:57:02.969911 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 23:57:02.969922 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 23:57:02.969932 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 23:57:02.969942 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:57:02.969953 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 23:57:02.969963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:57:02.969973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:57:02.969983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:57:02.969994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:57:02.970007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:57:02.970018 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:57:02.970028 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:57:02.970038 kernel: TSC deadline timer available
May 13 23:57:02.970048 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 23:57:02.970059 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:57:02.970069 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 23:57:02.970079 kernel: kvm-guest: setup PV sched yield
May 13 23:57:02.970090 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 23:57:02.970103 kernel: Booting paravirtualized kernel on KVM
May 13 23:57:02.970114 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:57:02.970125 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 23:57:02.970135 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 23:57:02.970144 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 23:57:02.970154 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 23:57:02.970164 kernel: kvm-guest: PV spinlocks enabled
May 13 23:57:02.970175 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:57:02.970187 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:02.970202 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:57:02.970212 kernel: random: crng init done
May 13 23:57:02.970222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:57:02.970241 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:57:02.970260 kernel: Fallback order for Node 0: 0
May 13 23:57:02.970277 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 23:57:02.970301 kernel: Policy zone: DMA32
May 13 23:57:02.970314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:57:02.970333 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 140996K reserved, 0K cma-reserved)
May 13 23:57:02.971080 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:57:02.971105 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:57:02.971125 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:57:02.971147 kernel: Dynamic Preempt: voluntary
May 13 23:57:02.971170 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:57:02.971194 kernel: rcu: RCU event tracing is enabled.
May 13 23:57:02.971205 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:57:02.971216 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:57:02.971233 kernel: Rude variant of Tasks RCU enabled.
May 13 23:57:02.971243 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:57:02.971254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:57:02.971268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:57:02.971279 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 23:57:02.971289 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:57:02.971299 kernel: Console: colour VGA+ 80x25
May 13 23:57:02.971310 kernel: printk: console [ttyS0] enabled
May 13 23:57:02.971320 kernel: ACPI: Core revision 20230628
May 13 23:57:02.971331 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 23:57:02.971345 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:57:02.971355 kernel: x2apic enabled
May 13 23:57:02.971366 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:57:02.971376 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 23:57:02.971387 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 23:57:02.971397 kernel: kvm-guest: setup PV IPIs
May 13 23:57:02.971421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 23:57:02.971433 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 23:57:02.971443 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 23:57:02.971454 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 23:57:02.971465 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 23:57:02.971479 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 23:57:02.971501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:57:02.971513 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:57:02.971534 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:57:02.971545 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 23:57:02.971560 kernel: RETBleed: Mitigation: untrained return thunk
May 13 23:57:02.971571 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:57:02.971582 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:57:02.971593 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 23:57:02.971619 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 23:57:02.971639 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 23:57:02.971650 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:57:02.971675 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:57:02.971704 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:57:02.971725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:57:02.971736 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 23:57:02.971747 kernel: Freeing SMP alternatives memory: 32K
May 13 23:57:02.971758 kernel: pid_max: default: 32768 minimum: 301
May 13 23:57:02.971768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:57:02.971794 kernel: landlock: Up and running.
May 13 23:57:02.971813 kernel: SELinux: Initializing.
May 13 23:57:02.971834 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:57:02.971855 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:57:02.971867 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 23:57:02.971884 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:57:02.971902 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:57:02.971930 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:57:02.971961 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 23:57:02.971984 kernel: ... version: 0
May 13 23:57:02.972008 kernel: ... bit width: 48
May 13 23:57:02.972025 kernel: ... generic registers: 6
May 13 23:57:02.972049 kernel: ... value mask: 0000ffffffffffff
May 13 23:57:02.972060 kernel: ... max period: 00007fffffffffff
May 13 23:57:02.972070 kernel: ... fixed-purpose events: 0
May 13 23:57:02.972081 kernel: ... event mask: 000000000000003f
May 13 23:57:02.972092 kernel: signal: max sigframe size: 1776
May 13 23:57:02.972103 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:57:02.972114 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:57:02.972124 kernel: smp: Bringing up secondary CPUs ...
May 13 23:57:02.972135 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:57:02.972150 kernel: .... node #0, CPUs: #1 #2 #3
May 13 23:57:02.972160 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:57:02.972171 kernel: smpboot: Max logical packages: 1
May 13 23:57:02.972182 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 23:57:02.972193 kernel: devtmpfs: initialized
May 13 23:57:02.972204 kernel: x86/mm: Memory block size: 128MB
May 13 23:57:02.972215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:57:02.972226 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:57:02.972237 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:57:02.972251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:57:02.972262 kernel: audit: initializing netlink subsys (disabled)
May 13 23:57:02.972273 kernel: audit: type=2000 audit(1747180621.913:1): state=initialized audit_enabled=0 res=1
May 13 23:57:02.972283 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:57:02.972294 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:57:02.972305 kernel: cpuidle: using governor menu
May 13 23:57:02.972316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:57:02.972326 kernel: dca service started, version 1.12.1
May 13 23:57:02.972337 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 23:57:02.972352 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 23:57:02.972363 kernel: PCI: Using configuration type 1 for base access
May 13 23:57:02.972374 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:57:02.972385 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:57:02.972395 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:57:02.972406 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:57:02.972417 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:57:02.972428 kernel: ACPI: Added _OSI(Module Device)
May 13 23:57:02.972439 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:57:02.972453 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:57:02.972464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:57:02.972474 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:57:02.972485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:57:02.972496 kernel: ACPI: Interpreter enabled
May 13 23:57:02.972507 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 23:57:02.972517 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:57:02.972528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:57:02.972539 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:57:02.972553 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 23:57:02.972564 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:57:02.972914 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:57:02.973141 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 23:57:02.973316 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 23:57:02.973332 kernel: PCI host bridge to bus 0000:00
May 13 23:57:02.973515 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:57:02.973704 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:57:02.973951 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:57:02.974114 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 23:57:02.974266 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 23:57:02.974421 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 23:57:02.974579 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:57:02.974825 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 23:57:02.975023 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 23:57:02.975193 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 23:57:02.975366 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 23:57:02.975548 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 23:57:02.975755 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:57:02.975917 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:57:02.976061 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 23:57:02.976243 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 23:57:02.976416 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 23:57:02.976744 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 23:57:02.976926 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 23:57:02.977096 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 23:57:02.977266 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 23:57:02.977462 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 23:57:02.977760 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 23:57:02.977936 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 23:57:02.978105 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 23:57:02.978273 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 23:57:02.978457 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 23:57:02.978638 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 23:57:02.978893 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 20507 usecs
May 13 23:57:02.979101 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 23:57:02.979267 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 23:57:02.979433 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 23:57:02.979622 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 23:57:02.979819 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 23:57:02.979836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:57:02.979853 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:57:02.979864 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:57:02.979875 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:57:02.979886 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 23:57:02.979897 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 23:57:02.979908 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 23:57:02.979919 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 23:57:02.979930 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 23:57:02.979941 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 23:57:02.979956 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 23:57:02.979967 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 23:57:02.979978 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 23:57:02.979989 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 23:57:02.979999 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 23:57:02.980011 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 23:57:02.980021 kernel: iommu: Default domain type: Translated
May 13 23:57:02.980032 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:57:02.980053 kernel: PCI: Using ACPI for IRQ routing
May 13 23:57:02.980070 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:57:02.980081 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 23:57:02.980092 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 23:57:02.980267 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 23:57:02.980434 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 23:57:02.980600 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:57:02.980616 kernel: vgaarb: loaded
May 13 23:57:02.980627 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 23:57:02.980643 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 23:57:02.980655 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:57:02.980681 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:57:02.980711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:57:02.980742 kernel: pnp: PnP ACPI init
May 13 23:57:02.980960 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 23:57:02.980977 kernel: pnp: PnP ACPI: found 6 devices
May 13 23:57:02.980989 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:57:02.981005 kernel: NET: Registered PF_INET protocol family
May 13 23:57:02.981016 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:57:02.981027 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:57:02.981038 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:57:02.981049 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:57:02.981060 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:57:02.981071 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:57:02.981082 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:57:02.981093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:57:02.981108 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:57:02.981119 kernel: NET: Registered PF_XDP protocol family
May 13 23:57:02.981275 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:57:02.981430 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:57:02.981584 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:57:02.981817 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 23:57:02.981970 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 23:57:02.982120 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 23:57:02.982142 kernel: PCI: CLS 0 bytes, default 64
May 13 23:57:02.982153 kernel: Initialise system trusted keyrings
May 13 23:57:02.982164 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:57:02.982175 kernel: Key type asymmetric registered
May 13 23:57:02.982186 kernel: Asymmetric key parser 'x509' registered
May 13 23:57:02.982197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:57:02.982207 kernel: io scheduler mq-deadline registered
May 13 23:57:02.982218 kernel: io scheduler kyber registered
May 13 23:57:02.982229 kernel: io scheduler bfq registered
May 13 23:57:02.982240 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:57:02.982255 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 23:57:02.982266 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 23:57:02.982277 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 23:57:02.982288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:57:02.982299 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:57:02.982311 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:57:02.982339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:57:02.982351 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:57:02.982532 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 23:57:02.982555 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:57:02.982738 kernel: rtc_cmos 00:04: registered as rtc0
May 13 23:57:02.982897 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T23:57:02 UTC (1747180622)
May 13 23:57:02.983051 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 23:57:02.983066 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 23:57:02.983077 kernel: NET: Registered PF_INET6 protocol family
May 13 23:57:02.983088 kernel: Segment Routing with IPv6
May 13 23:57:02.983098 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:57:02.983115 kernel: NET: Registered PF_PACKET protocol family
May 13 23:57:02.983126 kernel: Key type dns_resolver registered
May 13 23:57:02.983137 kernel: IPI shorthand broadcast: enabled
May 13 23:57:02.983148 kernel: sched_clock: Marking stable (869003732, 118935974)->(1065259572, -77319866)
May 13 23:57:02.983159 kernel: registered taskstats version 1
May 13 23:57:02.983170 kernel: Loading compiled-in X.509 certificates
May 13 23:57:02.983181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:57:02.983192 kernel: Key type .fscrypt registered
May 13 23:57:02.983202 kernel: Key type fscrypt-provisioning registered
May 13 23:57:02.983217 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:57:02.983228 kernel: ima: Allocated hash algorithm: sha1
May 13 23:57:02.983238 kernel: ima: No architecture policies found
May 13 23:57:02.983249 kernel: clk: Disabling unused clocks
May 13 23:57:02.983260 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:57:02.983271 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:57:02.983282 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:57:02.983293 kernel: Run /init as init process
May 13 23:57:02.983307 kernel: with arguments:
May 13 23:57:02.983318 kernel: /init
May 13 23:57:02.983328 kernel: with environment:
May 13 23:57:02.983339 kernel: HOME=/
May 13 23:57:02.983349 kernel: TERM=linux
May 13 23:57:02.983360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:57:02.983372 systemd[1]: Successfully made /usr/ read-only.
May 13 23:57:02.983387 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:57:02.983403 systemd[1]: Detected virtualization kvm.
May 13 23:57:02.983414 systemd[1]: Detected architecture x86-64.
May 13 23:57:02.983426 systemd[1]: Running in initrd.
May 13 23:57:02.983437 systemd[1]: No hostname configured, using default hostname.
May 13 23:57:02.983449 systemd[1]: Hostname set to <localhost>.
May 13 23:57:02.983461 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:57:02.983472 systemd[1]: Queued start job for default target initrd.target.
May 13 23:57:02.983484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:02.983499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:02.983528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:57:02.983544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:57:02.983556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:57:02.983569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:57:02.983587 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:57:02.983599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:57:02.983611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:02.983623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:02.983635 systemd[1]: Reached target paths.target - Path Units.
May 13 23:57:02.983647 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:57:02.983659 systemd[1]: Reached target swap.target - Swaps.
May 13 23:57:02.983686 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:57:02.983703 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:57:02.983723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:57:02.983736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:57:02.983748 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:57:02.983760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:02.983772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:02.983784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:02.983795 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:57:02.983807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:57:02.983823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:57:02.983835 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:57:02.983847 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:57:02.983859 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:57:02.983871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:57:02.983883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:02.983895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:57:02.983907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:02.983923 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:57:02.983966 systemd-journald[193]: Collecting audit messages is disabled.
May 13 23:57:02.984002 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:57:02.984015 systemd-journald[193]: Journal started
May 13 23:57:02.984043 systemd-journald[193]: Runtime Journal (/run/log/journal/e0c189992f39457da9af78c9f39e1241) is 6M, max 48.3M, 42.3M free.
May 13 23:57:02.971567 systemd-modules-load[194]: Inserted module 'overlay'
May 13 23:57:03.006648 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:57:03.006692 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:57:03.006709 kernel: Bridge firewalling registered
May 13 23:57:02.999698 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 13 23:57:03.007036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:03.007773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:03.011657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:03.012617 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:57:03.016779 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:57:03.038146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:57:03.044001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:57:03.047394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:03.053936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:03.058160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:57:03.059287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:03.064966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:57:03.073954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:03.088919 dracut-cmdline[230]: dracut-dracut-053
May 13 23:57:03.092989 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:03.112481 systemd-resolved[229]: Positive Trust Anchors:
May 13 23:57:03.112500 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:03.112532 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:03.115555 systemd-resolved[229]: Defaulting to hostname 'linux'.
May 13 23:57:03.116859 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:03.122511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:03.212747 kernel: SCSI subsystem initialized
May 13 23:57:03.224733 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:57:03.239730 kernel: iscsi: registered transport (tcp)
May 13 23:57:03.268768 kernel: iscsi: registered transport (qla4xxx)
May 13 23:57:03.268871 kernel: QLogic iSCSI HBA Driver
May 13 23:57:03.336879 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:03.340569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:57:03.389774 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:57:03.389880 kernel: device-mapper: uevent: version 1.0.3
May 13 23:57:03.391002 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:57:03.439756 kernel: raid6: avx2x4 gen() 28223 MB/s
May 13 23:57:03.456762 kernel: raid6: avx2x2 gen() 29821 MB/s
May 13 23:57:03.474084 kernel: raid6: avx2x1 gen() 18086 MB/s
May 13 23:57:03.474153 kernel: raid6: using algorithm avx2x2 gen() 29821 MB/s
May 13 23:57:03.491921 kernel: raid6: .... xor() 15712 MB/s, rmw enabled
May 13 23:57:03.492017 kernel: raid6: using avx2x2 recovery algorithm
May 13 23:57:03.515756 kernel: xor: automatically using best checksumming function avx
May 13 23:57:03.682733 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:57:03.697346 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:03.699768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:03.733203 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 13 23:57:03.740473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:03.743679 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:57:03.771349 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
May 13 23:57:03.808429 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:57:03.811196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:57:03.898446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:03.902273 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:57:03.929893 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:57:03.935897 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:57:03.952826 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:57:03.952862 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 23:57:03.953114 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:57:03.945343 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:03.946891 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:57:03.953776 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:57:03.967233 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:57:03.967261 kernel: AES CTR mode by8 optimization enabled
May 13 23:57:03.967275 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:57:03.969325 kernel: GPT:9289727 != 19775487
May 13 23:57:03.969363 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:57:03.969378 kernel: GPT:9289727 != 19775487
May 13 23:57:03.969392 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:57:03.969406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:57:03.983639 kernel: libata version 3.00 loaded.
May 13 23:57:03.984614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:57:03.987155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:03.994145 kernel: ahci 0000:00:1f.2: version 3.0
May 13 23:57:03.994755 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 23:57:03.992946 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:04.004917 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 23:57:04.005187 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 23:57:03.995157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:03.995361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:04.009788 kernel: scsi host0: ahci
May 13 23:57:04.000905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:04.023322 kernel: scsi host1: ahci
May 13 23:57:04.023573 kernel: scsi host2: ahci
May 13 23:57:04.023775 kernel: scsi host3: ahci
May 13 23:57:04.024937 kernel: scsi host4: ahci
May 13 23:57:04.004044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:04.006575 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:04.031922 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
May 13 23:57:04.020708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:57:04.036712 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (465)
May 13 23:57:04.036770 kernel: scsi host5: ahci
May 13 23:57:04.037032 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 23:57:04.038373 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 23:57:04.038401 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 23:57:04.040437 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 23:57:04.040470 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 23:57:04.042333 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 23:57:04.072293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:57:04.113314 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:57:04.114266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:04.129053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:57:04.141355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:57:04.142929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:57:04.147559 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:57:04.149936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:04.175227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:04.188650 disk-uuid[556]: Primary Header is updated.
May 13 23:57:04.188650 disk-uuid[556]: Secondary Entries is updated.
May 13 23:57:04.188650 disk-uuid[556]: Secondary Header is updated.
May 13 23:57:04.192718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:57:04.199711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:57:04.349725 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 23:57:04.349819 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 23:57:04.357718 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 23:57:04.357751 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 23:57:04.358719 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 23:57:04.359702 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 23:57:04.360706 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 23:57:04.360751 kernel: ata3.00: applying bridge limits
May 13 23:57:04.361839 kernel: ata3.00: configured for UDMA/100
May 13 23:57:04.362716 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 23:57:04.417739 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 23:57:04.418128 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:57:04.431710 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 23:57:05.209705 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:57:05.210267 disk-uuid[564]: The operation has completed successfully.
May 13 23:57:05.245146 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:57:05.245317 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:57:05.280380 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:57:05.310424 sh[591]: Success
May 13 23:57:05.324708 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 23:57:05.361732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:57:05.364361 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:57:05.381310 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:57:05.406148 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:57:05.406236 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:05.406248 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:57:05.407181 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:57:05.407929 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:57:05.413902 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:57:05.415306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:57:05.417880 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:57:05.420540 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:57:05.453085 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:05.453152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:05.453168 kernel: BTRFS info (device vda6): using free space tree
May 13 23:57:05.456702 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:57:05.461691 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:05.543351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:57:05.546124 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:05.584958 systemd-networkd[767]: lo: Link UP
May 13 23:57:05.584972 systemd-networkd[767]: lo: Gained carrier
May 13 23:57:05.586847 systemd-networkd[767]: Enumeration completed
May 13 23:57:05.586964 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:57:05.587307 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:05.587312 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:05.588134 systemd-networkd[767]: eth0: Link UP
May 13 23:57:05.588139 systemd-networkd[767]: eth0: Gained carrier
May 13 23:57:05.588146 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:05.590344 systemd[1]: Reached target network.target - Network.
May 13 23:57:05.605770 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:57:05.727939 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:57:05.730424 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:57:05.802643 ignition[772]: Ignition 2.20.0
May 13 23:57:05.802681 ignition[772]: Stage: fetch-offline
May 13 23:57:05.802739 ignition[772]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:05.802756 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:05.802884 ignition[772]: parsed url from cmdline: ""
May 13 23:57:05.802890 ignition[772]: no config URL provided
May 13 23:57:05.802897 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:57:05.802911 ignition[772]: no config at "/usr/lib/ignition/user.ign"
May 13 23:57:05.802948 ignition[772]: op(1): [started] loading QEMU firmware config module
May 13 23:57:05.802955 ignition[772]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:57:05.810262 ignition[772]: op(1): [finished] loading QEMU firmware config module
May 13 23:57:05.851637 ignition[772]: parsing config with SHA512: 309e57b50b9982b1cbaf373e0fadcc55077cd9f8a997362f692261b0bf69740eb300d33c347a0399dba6345233b7c3f05e75f67d541dcb5cd5fb38c2bbd086fb
May 13 23:57:05.857243 unknown[772]: fetched base config from "system"
May 13 23:57:05.857261 unknown[772]: fetched user config from "qemu"
May 13 23:57:05.857991 ignition[772]: fetch-offline: fetch-offline passed
May 13 23:57:05.858567 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.80
May 13 23:57:05.858152 ignition[772]: Ignition finished successfully
May 13 23:57:05.858577 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
May 13 23:57:05.860394 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:57:05.862073 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:57:05.863368 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:57:05.896940 ignition[783]: Ignition 2.20.0
May 13 23:57:05.896957 ignition[783]: Stage: kargs
May 13 23:57:05.897181 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:05.897197 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:05.898346 ignition[783]: kargs: kargs passed
May 13 23:57:05.898405 ignition[783]: Ignition finished successfully
May 13 23:57:05.906464 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:57:05.910032 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:57:05.950562 ignition[792]: Ignition 2.20.0
May 13 23:57:05.950579 ignition[792]: Stage: disks
May 13 23:57:05.950831 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:05.950846 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:05.964808 ignition[792]: disks: disks passed
May 13 23:57:05.965742 ignition[792]: Ignition finished successfully
May 13 23:57:05.969059 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:57:05.997480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:57:05.997959 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:57:05.998368 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:57:05.999001 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:57:06.005349 systemd[1]: Reached target basic.target - Basic System.
May 13 23:57:06.006977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:57:06.069521 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:57:06.393482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:57:06.395049 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:57:06.538705 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:57:06.539756 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:57:06.542034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:57:06.544425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:06.546882 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:57:06.547625 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:57:06.547709 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:57:06.547742 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:57:06.593230 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:57:06.596631 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:57:06.600698 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (810)
May 13 23:57:06.602865 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:06.602890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:06.602901 kernel: BTRFS info (device vda6): using free space tree
May 13 23:57:06.606685 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:57:06.607500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:06.639761 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:57:06.649716 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
May 13 23:57:06.654289 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:57:06.660476 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:57:06.764381 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:57:06.766822 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:57:06.768145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:57:06.787043 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:57:06.788277 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:06.801321 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:57:06.909919 systemd-networkd[767]: eth0: Gained IPv6LL
May 13 23:57:06.976047 ignition[927]: INFO : Ignition 2.20.0
May 13 23:57:06.976047 ignition[927]: INFO : Stage: mount
May 13 23:57:06.977993 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:06.977993 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:06.977993 ignition[927]: INFO : mount: mount passed
May 13 23:57:06.977993 ignition[927]: INFO : Ignition finished successfully
May 13 23:57:06.982317 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:57:06.986695 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:57:07.542715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:07.567718 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937)
May 13 23:57:07.570064 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:07.570106 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:07.570118 kernel: BTRFS info (device vda6): using free space tree
May 13 23:57:07.573709 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:57:07.575513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:07.607261 ignition[954]: INFO : Ignition 2.20.0
May 13 23:57:07.607261 ignition[954]: INFO : Stage: files
May 13 23:57:07.609408 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:07.609408 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:07.609408 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:57:07.613037 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:57:07.613037 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:57:07.613037 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:57:07.613037 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:57:07.619132 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:57:07.619132 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:57:07.619132 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 23:57:07.613135 unknown[954]: wrote ssh authorized keys file for user: core
May 13 23:57:07.675349 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:57:07.959854 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:57:07.959854 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:57:07.964297 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:57:07.966203 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:07.968228 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:07.969926 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:07.971652 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:07.973345 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:07.975084 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:07.977039 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:07.978910 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:07.980914 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:57:07.980914 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:57:07.980914 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:57:07.992163 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 23:57:08.484034 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:57:09.255223 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:57:09.255223 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:57:09.260052 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:09.262967 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:09.262967 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:57:09.296540 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 23:57:09.297855 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:57:09.300035 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:57:09.300035 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 23:57:09.303264 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:57:09.326161 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:57:09.358810 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:57:09.360449 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:57:09.360449 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:57:09.360449 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:57:09.360449 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:09.360449 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:09.360449 ignition[954]: INFO : files: files passed
May 13 23:57:09.360449 ignition[954]: INFO : Ignition finished successfully
May 13 23:57:09.396553 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:57:09.398379 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:57:09.400139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:57:09.415586 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:57:09.415736 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:57:09.419385 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:57:09.424048 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:09.424048 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:09.428723 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:57:09.432382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:57:09.434061 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:57:09.437221 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:57:09.488061 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:57:09.488263 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:57:09.491350 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:57:09.504471 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:57:09.505103 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:57:09.506065 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:57:09.533839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:57:09.537034 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:57:09.556255 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:09.557903 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:09.560383 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:57:09.562722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:57:09.562902 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:57:09.565284 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:57:09.567233 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:57:09.569542 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:57:09.571912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:57:09.574419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:57:09.576779 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:57:09.579178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:57:09.581498 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:57:09.583470 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:57:09.585637 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:57:09.587393 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:57:09.587557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:57:09.589906 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:09.591331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:09.593456 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:57:09.593643 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:09.595788 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:57:09.595906 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:57:09.598205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:57:09.598323 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:57:09.600373 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:57:09.602273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:57:09.605724 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:09.607213 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:57:09.609190 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:57:09.611278 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:57:09.611415 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:57:09.613135 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:57:09.613257 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:57:09.615172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:57:09.615307 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:57:09.617820 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:57:09.617945 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:57:09.621003 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:57:09.622330 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:57:09.622505 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:09.625657 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:57:09.627190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:57:09.627370 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:09.629718 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:57:09.629982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:57:09.639796 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:57:09.639936 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:57:09.690370 ignition[1010]: INFO : Ignition 2.20.0
May 13 23:57:09.690370 ignition[1010]: INFO : Stage: umount
May 13 23:57:09.736836 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:09.736836 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:57:09.736836 ignition[1010]: INFO : umount: umount passed
May 13 23:57:09.736836 ignition[1010]: INFO : Ignition finished successfully
May 13 23:57:09.693791 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:57:09.693945 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:57:09.735535 systemd[1]: Stopped target network.target - Network.
May 13 23:57:09.736842 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:57:09.736927 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:57:09.738912 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:57:09.738965 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:57:09.741120 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:57:09.741172 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:57:09.743005 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:57:09.743053 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:57:09.745095 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:57:09.747101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:57:09.775634 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:57:09.775854 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:57:09.780891 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:57:09.781155 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:57:09.781317 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:57:09.785289 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:57:09.786134 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:57:09.786213 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:09.788429 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:57:09.789419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:57:09.789478 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:57:09.791720 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:57:09.791771 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:09.795173 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:57:09.795267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:09.823658 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:57:09.823767 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:09.825419 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:09.827280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:57:09.827353 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:09.846366 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:57:09.846570 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:57:09.849188 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:57:09.849439 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:09.852339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:57:09.852453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:09.853851 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:57:09.853908 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:09.856094 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:57:09.856171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:09.858604 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:57:09.858745 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:09.860633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:57:09.860736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:09.864048 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:57:09.865561 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:57:09.865639 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:09.868809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:09.868887 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:09.890208 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:57:09.890323 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:09.900129 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:57:09.900312 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:57:09.955327 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:57:09.964363 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:57:09.964565 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:57:09.967926 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:57:09.969613 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:57:09.969820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:57:09.973192 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:57:09.998441 systemd[1]: Switching root.
May 13 23:57:10.037254 systemd-journald[193]: Journal stopped
May 13 23:57:11.514372 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 13 23:57:11.514465 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:57:11.514487 kernel: SELinux: policy capability open_perms=1
May 13 23:57:11.514499 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:57:11.514512 kernel: SELinux: policy capability always_check_network=0
May 13 23:57:11.514524 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:57:11.514536 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:57:11.514560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:57:11.514572 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:57:11.514587 kernel: audit: type=1403 audit(1747180630.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:57:11.514603 systemd[1]: Successfully loaded SELinux policy in 52.662ms.
May 13 23:57:11.514625 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.735ms.
May 13 23:57:11.514639 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:57:11.514652 systemd[1]: Detected virtualization kvm.
May 13 23:57:11.516267 systemd[1]: Detected architecture x86-64.
May 13 23:57:11.516287 systemd[1]: Detected first boot.
May 13 23:57:11.516300 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:57:11.516313 zram_generator::config[1058]: No configuration found.
May 13 23:57:11.516338 kernel: Guest personality initialized and is inactive
May 13 23:57:11.516351 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 23:57:11.516363 kernel: Initialized host personality
May 13 23:57:11.516375 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:57:11.516388 systemd[1]: Populated /etc with preset unit settings.
May 13 23:57:11.516401 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:57:11.516415 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:57:11.516434 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:57:11.516449 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:57:11.516475 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:57:11.516488 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:57:11.516501 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:57:11.516514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:57:11.516528 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:57:11.516541 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:57:11.516554 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:57:11.516567 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:57:11.516584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:11.516597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:11.516610 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:57:11.516623 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:57:11.516636 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:57:11.516649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:57:11.516676 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:57:11.516692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:11.516705 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:57:11.516718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:57:11.516731 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:57:11.516744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:57:11.516757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:11.516770 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:57:11.516783 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:57:11.516797 systemd[1]: Reached target swap.target - Swaps.
May 13 23:57:11.516809 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:57:11.516826 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:57:11.516839 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:57:11.516852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:11.516866 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:11.516878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:11.516891 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:57:11.516904 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:57:11.516917 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:57:11.516930 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:57:11.516946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:11.516965 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:57:11.516978 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:57:11.516991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:57:11.517004 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:57:11.517017 systemd[1]: Reached target machines.target - Containers.
May 13 23:57:11.517031 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:57:11.517044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:11.517060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:57:11.517073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:57:11.517086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:11.517099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:57:11.517115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:11.517129 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:57:11.517147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:11.517164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:57:11.517181 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:57:11.517194 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:57:11.517207 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:57:11.517220 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:57:11.517233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:11.517246 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:57:11.517259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:57:11.517272 kernel: fuse: init (API version 7.39)
May 13 23:57:11.517284 kernel: loop: module loaded
May 13 23:57:11.517299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:57:11.517313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:57:11.517326 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:57:11.517339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:57:11.517352 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:57:11.517365 systemd[1]: Stopped verity-setup.service.
May 13 23:57:11.517378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:11.517395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:57:11.517431 systemd-journald[1122]: Collecting audit messages is disabled.
May 13 23:57:11.517463 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:57:11.517478 systemd-journald[1122]: Journal started
May 13 23:57:11.517504 systemd-journald[1122]: Runtime Journal (/run/log/journal/e0c189992f39457da9af78c9f39e1241) is 6M, max 48.3M, 42.3M free.
May 13 23:57:11.271820 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:57:11.286361 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:57:11.286964 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:57:11.519746 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:57:11.519771 kernel: ACPI: bus type drm_connector registered
May 13 23:57:11.522213 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:57:11.523530 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:57:11.526120 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:57:11.527592 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:57:11.529226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:11.531122 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:57:11.531357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:57:11.533063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:11.533289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:11.534930 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:57:11.535158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:57:11.536728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:11.536986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:11.538746 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:57:11.538982 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:57:11.540651 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:11.540903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:11.542893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:11.544779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:57:11.546690 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:57:11.548644 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:57:11.567724 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:57:11.571044 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:57:11.573686 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:57:11.575033 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:57:11.575067 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:57:11.577429 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:57:11.590835 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:57:11.596837 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:57:11.598363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:11.614111 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:57:11.616996 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:57:11.618602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:57:11.622837 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:57:11.624319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:57:11.626021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:57:11.628925 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:57:11.633158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:11.635024 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:57:11.636602 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:57:11.650267 systemd-journald[1122]: Time spent on flushing to /var/log/journal/e0c189992f39457da9af78c9f39e1241 is 18.992ms for 968 entries.
May 13 23:57:11.650267 systemd-journald[1122]: System Journal (/var/log/journal/e0c189992f39457da9af78c9f39e1241) is 8M, max 195.6M, 187.6M free.
May 13 23:57:12.120389 systemd-journald[1122]: Received client request to flush runtime journal.
May 13 23:57:12.120493 kernel: loop0: detected capacity change from 0 to 109808
May 13 23:57:12.120531 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:57:12.120557 kernel: loop1: detected capacity change from 0 to 205544
May 13 23:57:12.120582 kernel: loop2: detected capacity change from 0 to 151640
May 13 23:57:12.120608 kernel: loop3: detected capacity change from 0 to 109808
May 13 23:57:12.120639 kernel: loop4: detected capacity change from 0 to 205544
May 13 23:57:12.120681 kernel: loop5: detected capacity change from 0 to 151640
May 13 23:57:12.120712 zram_generator::config[1225]: No configuration found.
May 13 23:57:11.650948 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:57:11.659830 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:57:11.671167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:11.685945 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:57:11.895906 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:57:11.896620 (sd-merge)[1193]: Merged extensions into '/usr'.
May 13 23:57:11.901276 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:57:11.901287 systemd[1]: Reloading...
May 13 23:57:12.132374 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:57:12.172283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:57:12.252338 systemd[1]: Reloading finished in 350 ms.
May 13 23:57:12.274642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:57:12.276755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:57:12.278574 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:57:12.280538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:57:12.282399 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:57:12.292191 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:57:12.304048 systemd[1]: Starting ensure-sysext.service...
May 13 23:57:12.306232 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:57:12.310870 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:57:12.333557 systemd[1]: Reload requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
May 13 23:57:12.333575 systemd[1]: Reloading...
May 13 23:57:12.391698 zram_generator::config[1295]: No configuration found.
May 13 23:57:12.519420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:57:12.586379 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:57:12.586598 systemd[1]: Reloading finished in 252 ms.
May 13 23:57:12.618216 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:57:12.620213 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:57:12.630992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:57:12.633555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:57:12.637289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.637541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:12.647792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:12.652889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:12.655361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:12.656729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:12.656845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:12.656950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.660164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:12.660408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:12.662319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:12.662563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:12.664361 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:12.664605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:12.671458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.671642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:12.673091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:12.688597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:12.690825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:12.691948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:12.692053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:12.692158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.693179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:12.693404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:12.695315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:12.695545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:12.697284 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:12.697506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:12.705451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.705695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:12.707435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:12.709691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:57:12.711825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:12.723630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:12.725129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:12.725254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:12.725430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:12.726989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:12.727246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:12.729489 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:57:12.729741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:57:12.731307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:12.731548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:12.737216 systemd[1]: Finished ensure-sysext.service.
May 13 23:57:12.738852 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:12.739094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:12.740606 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:57:12.741409 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:57:12.743139 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
May 13 23:57:12.743162 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
May 13 23:57:12.743208 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:57:12.743736 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 13 23:57:12.743835 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 13 23:57:12.744203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:57:12.744316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:57:12.748189 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:57:12.748204 systemd-tmpfiles[1335]: Skipping /boot
May 13 23:57:12.751295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:12.762788 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:57:12.762804 systemd-tmpfiles[1335]: Skipping /boot
May 13 23:57:12.786233 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:12.789370 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:57:12.791893 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:57:12.805412 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:57:12.810510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:57:12.813873 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:57:12.816809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:57:12.820506 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:57:12.888918 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:57:12.891768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:57:12.911187 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:57:12.974649 augenrules[1396]: No rules
May 13 23:57:12.975578 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:57:12.975914 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:57:12.996068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:57:13.003159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:13.009711 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:57:13.022131 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:57:13.040385 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:57:13.043081 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:57:13.075936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:57:13.076544 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:57:13.087136 systemd-resolved[1365]: Positive Trust Anchors:
May 13 23:57:13.087156 systemd-resolved[1365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:13.087186 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:13.091286 systemd-resolved[1365]: Defaulting to hostname 'linux'.
May 13 23:57:13.093142 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:13.099010 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
May 13 23:57:13.119254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:13.137391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:13.141710 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:13.195167 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:57:13.204752 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1426)
May 13 23:57:13.233996 systemd-networkd[1413]: lo: Link UP
May 13 23:57:13.234007 systemd-networkd[1413]: lo: Gained carrier
May 13 23:57:13.236498 systemd-networkd[1413]: Enumeration completed
May 13 23:57:13.236601 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:57:13.238422 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:13.238430 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:13.258858 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:13.258925 systemd[1]: Reached target network.target - Network.
May 13 23:57:13.258927 systemd-networkd[1413]: eth0: Link UP
May 13 23:57:13.258932 systemd-networkd[1413]: eth0: Gained carrier
May 13 23:57:13.258944 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:13.262276 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:57:13.265394 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:57:13.271823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:57:13.271824 systemd-networkd[1413]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:57:13.272831 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection.
May 13 23:57:13.273611 systemd-timesyncd[1367]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:57:13.273681 systemd-timesyncd[1367]: Initial clock synchronization to Tue 2025-05-13 23:57:13.613279 UTC.
May 13 23:57:13.275599 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:57:13.338689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 23:57:13.345703 kernel: ACPI: button: Power Button [PWRF]
May 13 23:57:13.365895 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:57:13.408070 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 23:57:13.408175 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 23:57:13.408527 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 23:57:13.408805 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 23:57:13.430241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:57:13.442696 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:57:13.461924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:13.479880 kernel: kvm_amd: TSC scaling supported
May 13 23:57:13.479968 kernel: kvm_amd: Nested Virtualization enabled
May 13 23:57:13.479989 kernel: kvm_amd: Nested Paging enabled
May 13 23:57:13.482185 kernel: kvm_amd: LBR virtualization supported
May 13 23:57:13.482226 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 23:57:13.482240 kernel: kvm_amd: Virtual GIF supported
May 13 23:57:13.503705 kernel: EDAC MC: Ver: 3.0.0
May 13 23:57:13.550526 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:57:13.613913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:13.617499 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:57:13.641190 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:57:13.677105 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:57:13.679304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:13.680484 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:57:13.681723 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:57:13.683025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:57:13.684708 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:57:13.685986 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:57:13.687304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:57:13.688687 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:57:13.688718 systemd[1]: Reached target paths.target - Path Units.
May 13 23:57:13.689893 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:57:13.692159 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:57:13.695434 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:57:13.699506 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:57:13.730811 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:57:13.732314 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:57:13.736260 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:57:13.737923 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:57:13.740472 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:57:13.742291 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:57:13.743593 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:57:13.744716 systemd[1]: Reached target basic.target - Basic System.
May 13 23:57:13.745851 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:57:13.745892 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:57:13.747107 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:57:13.749782 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:57:13.798790 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:57:13.801477 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:57:13.835959 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:57:13.837363 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:57:13.839073 jq[1462]: false
May 13 23:57:13.839449 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:57:13.847989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:57:13.853207 dbus-daemon[1461]: [system] SELinux support is enabled
May 13 23:57:13.854884 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:57:13.859053 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:57:13.865532 extend-filesystems[1463]: Found loop3
May 13 23:57:13.865532 extend-filesystems[1463]: Found loop4
May 13 23:57:13.865532 extend-filesystems[1463]: Found loop5
May 13 23:57:13.865532 extend-filesystems[1463]: Found sr0
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda1
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda2
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda3
May 13 23:57:13.865532 extend-filesystems[1463]: Found usr
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda4
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda6
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda7
May 13 23:57:13.865532 extend-filesystems[1463]: Found vda9
May 13 23:57:13.865532 extend-filesystems[1463]: Checking size of /dev/vda9
May 13 23:57:13.903977 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:57:13.905791 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:57:13.906498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:57:13.915850 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:57:13.929277 extend-filesystems[1463]: Resized partition /dev/vda9 May 13 23:57:13.929198 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:57:13.931001 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:57:13.935514 extend-filesystems[1480]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:57:13.937772 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:57:13.942900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:57:13.943253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:57:13.943739 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:57:13.944087 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:57:13.947836 update_engine[1477]: I20250513 23:57:13.947741 1477 main.cc:92] Flatcar Update Engine starting May 13 23:57:13.950690 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1426) May 13 23:57:13.948880 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:57:13.949220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:57:13.962292 update_engine[1477]: I20250513 23:57:13.961194 1477 update_check_scheduler.cc:74] Next update check in 3m52s May 13 23:57:13.963746 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:57:13.970187 jq[1481]: true May 13 23:57:14.003151 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:57:14.021850 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:57:14.020249 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:57:14.020286 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:57:14.057538 tar[1484]: linux-amd64/helm May 13 23:57:14.022885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:57:14.058005 jq[1493]: true May 13 23:57:14.022911 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:57:14.024836 systemd[1]: Started update-engine.service - Update Engine. May 13 23:57:14.032240 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:57:14.060031 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:57:14.060031 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:57:14.060031 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:57:14.068245 extend-filesystems[1463]: Resized filesystem in /dev/vda9 May 13 23:57:14.062175 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:57:14.063225 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
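[Editor's note] The extend-filesystems entries above record an on-line ext4 grow of /dev/vda9 from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB) while the filesystem stayed mounted at /. A minimal sketch of the same operation, assuming cloud-utils growpart and e2fsprogs resize2fs are available; device names are taken from the log, but the script is illustrative, not Flatcar's actual implementation:

```python
import subprocess

DISK, PART = "/dev/vda", "9"       # device names from the log above

# Grow partition 9 to the end of the disk. growpart exits 1 with
# "NOCHANGE" when the partition is already maximal, so accept 0 and 1.
r = subprocess.run(["growpart", DISK, PART])
if r.returncode not in (0, 1):
    raise SystemExit(f"growpart failed: {r.returncode}")

# On-line resize of the mounted ext4 filesystem to fill the partition,
# matching the "resizing filesystem from 553472 to 1864699 blocks" entry.
subprocess.run(["resize2fs", f"{DISK}{PART}"], check=True)
```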
May 13 23:57:14.086596 systemd-logind[1475]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:57:14.086638 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:57:14.087162 systemd-logind[1475]: New seat seat0. May 13 23:57:14.095763 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:57:14.138269 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:57:14.162521 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:57:14.180148 bash[1523]: Updated "/home/core/.ssh/authorized_keys" May 13 23:57:14.182102 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:57:14.184696 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:57:14.193789 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:57:14.198412 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:57:14.232226 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:57:14.232627 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:57:14.236738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:57:14.318312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:57:14.333266 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:57:14.334134 systemd-networkd[1413]: eth0: Gained IPv6LL May 13 23:57:14.337000 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:57:14.339193 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:57:14.341758 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:57:14.345824 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:57:14.349988 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:57:14.353435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:14.358407 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:57:14.448457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:57:14.471170 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:57:14.471565 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:57:14.475277 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
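[Editor's note] sshd_keygen above reports generating fresh RSA/ECDSA/ED25519 host keys on first boot. The stock one-shot equivalent is `ssh-keygen -A`, which creates any missing host keys under /etc/ssh; a small sketch, assuming OpenSSH is installed:

```python
import subprocess
from pathlib import Path

# Create any missing host keys (rsa, ecdsa, ed25519) with default settings.
# -A is idempotent: existing keys are left untouched.
subprocess.run(["ssh-keygen", "-A"], check=True)

# Print SHA256 fingerprints, the same form the log uses for the client
# key ("RSA SHA256:7f2Xac...").
for pub in sorted(Path("/etc/ssh").glob("ssh_host_*_key.pub")):
    subprocess.run(["ssh-keygen", "-lf", str(pub)], check=True)
```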
May 13 23:57:14.478089 containerd[1491]: time="2025-05-13T23:57:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:57:14.481116 containerd[1491]: time="2025-05-13T23:57:14.480512086Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:57:14.505788 containerd[1491]: time="2025-05-13T23:57:14.505694016Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.359µs" May 13 23:57:14.505788 containerd[1491]: time="2025-05-13T23:57:14.505768965Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:57:14.505932 containerd[1491]: time="2025-05-13T23:57:14.505808079Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:57:14.506119 containerd[1491]: time="2025-05-13T23:57:14.506088539Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:57:14.506165 containerd[1491]: time="2025-05-13T23:57:14.506115340Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:57:14.506165 containerd[1491]: time="2025-05-13T23:57:14.506150913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:57:14.506280 containerd[1491]: time="2025-05-13T23:57:14.506247784Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:57:14.506280 containerd[1491]: time="2025-05-13T23:57:14.506267826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:57:14.506755 containerd[1491]: time="2025-05-13T23:57:14.506719531Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:57:14.506755 containerd[1491]: time="2025-05-13T23:57:14.506748337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:57:14.506799 containerd[1491]: time="2025-05-13T23:57:14.506764661Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:57:14.506799 containerd[1491]: time="2025-05-13T23:57:14.506778165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:57:14.506973 containerd[1491]: time="2025-05-13T23:57:14.506932699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:57:14.507327 containerd[1491]: time="2025-05-13T23:57:14.507286897Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:57:14.507365 containerd[1491]: time="2025-05-13T23:57:14.507331379Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 13 23:57:14.507365 containerd[1491]: time="2025-05-13T23:57:14.507344005Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:57:14.507449 containerd[1491]: time="2025-05-13T23:57:14.507404373Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:57:14.507949 containerd[1491]: time="2025-05-13T23:57:14.507785559Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:57:14.507949 containerd[1491]: time="2025-05-13T23:57:14.507888111Z" level=info msg="metadata content store policy set" policy=shared May 13 23:57:14.517918 containerd[1491]: time="2025-05-13T23:57:14.517889801Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:57:14.518031 containerd[1491]: time="2025-05-13T23:57:14.518016230Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:57:14.518152 containerd[1491]: time="2025-05-13T23:57:14.518137143Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:57:14.518210 containerd[1491]: time="2025-05-13T23:57:14.518196781Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518269180Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518286684Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518299500Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518311731Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518323470Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518334154Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518343846Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:57:14.518512 containerd[1491]: time="2025-05-13T23:57:14.518355345Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:57:14.518811 containerd[1491]: time="2025-05-13T23:57:14.518790715Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:57:14.518874 containerd[1491]: time="2025-05-13T23:57:14.518860922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:57:14.518928 containerd[1491]: time="2025-05-13T23:57:14.518915796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:57:14.518978 containerd[1491]: time="2025-05-13T23:57:14.518965720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 May 13 23:57:14.519036 containerd[1491]: time="2025-05-13T23:57:14.519022474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519112619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519133632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519145821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519156538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519167170Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519178125Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519246922Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:57:14.519289 containerd[1491]: time="2025-05-13T23:57:14.519260270Z" level=info msg="Start snapshots syncer" May 13 23:57:14.519519 containerd[1491]: time="2025-05-13T23:57:14.519502525Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:57:14.520106 containerd[1491]: time="2025-05-13T23:57:14.519993177Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:57:14.520106 containerd[1491]: time="2025-05-13T23:57:14.520057462Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:57:14.520548 containerd[1491]: time="2025-05-13T23:57:14.520388514Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:57:14.520761 containerd[1491]: time="2025-05-13T23:57:14.520735463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:57:14.520877 containerd[1491]: time="2025-05-13T23:57:14.520858622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.520928661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.520944997Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.520958814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.520969060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.520983796Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521007651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: 
time="2025-05-13T23:57:14.521045857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521058818Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521094433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521108355Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521118560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521128586Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521137871Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:57:14.521168 containerd[1491]: time="2025-05-13T23:57:14.521148503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:57:14.521481 containerd[1491]: time="2025-05-13T23:57:14.521464130Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:57:14.521552 containerd[1491]: time="2025-05-13T23:57:14.521530158Z" level=info msg="runtime interface created" May 13 23:57:14.521604 containerd[1491]: time="2025-05-13T23:57:14.521592312Z" level=info msg="created NRI interface" May 13 23:57:14.521654 containerd[1491]: time="2025-05-13T23:57:14.521641755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:57:14.521728 containerd[1491]: time="2025-05-13T23:57:14.521699899Z" level=info msg="Connect containerd service" May 13 23:57:14.521827 containerd[1491]: time="2025-05-13T23:57:14.521791401Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:57:14.522957 containerd[1491]: time="2025-05-13T23:57:14.522933881Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:57:14.817840 containerd[1491]: time="2025-05-13T23:57:14.816491061Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:57:14.817840 containerd[1491]: time="2025-05-13T23:57:14.816601092Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.817418221Z" level=info msg="Start subscribing containerd event" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818679004Z" level=info msg="Start recovering state" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818865915Z" level=info msg="Start event monitor" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818887555Z" level=info msg="Start cni network conf syncer for default" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818899201Z" level=info msg="Start streaming server" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818917666Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818928518Z" level=info msg="runtime interface starting up..." May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818937510Z" level=info msg="starting plugins..." May 13 23:57:14.819126 containerd[1491]: time="2025-05-13T23:57:14.818966796Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:57:14.819601 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:57:14.821512 containerd[1491]: time="2025-05-13T23:57:14.820038516Z" level=info msg="containerd successfully booted in 0.344437s" May 13 23:57:14.829451 tar[1484]: linux-amd64/LICENSE May 13 23:57:14.829526 tar[1484]: linux-amd64/README.md May 13 23:57:14.863595 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:57:15.593036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:15.595507 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:57:15.597122 systemd[1]: Startup finished in 1.049s (kernel) + 7.792s (initrd) + 5.089s (userspace) = 13.931s. May 13 23:57:15.635313 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:16.347938 kubelet[1587]: E0513 23:57:16.347780 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:16.352903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:16.353157 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:16.353595 systemd[1]: kubelet.service: Consumed 1.638s CPU time, 238.4M memory peak. May 13 23:57:16.511462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:57:16.512908 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:54818.service - OpenSSH per-connection server daemon (10.0.0.1:54818). May 13 23:57:16.598125 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 54818 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:16.600958 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:16.615655 systemd-logind[1475]: New session 1 of user core. May 13 23:57:16.617396 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
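[Editor's note] The kubelet exits here, and keeps exiting through the retries below, because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file only appears once `kubeadm init` or `kubeadm join` runs. As an illustration of the file the error refers to, a sketch that writes a placeholder KubeletConfiguration; the field values are assumptions, not what kubeadm would generate for this host:

```python
from pathlib import Path

# Minimal KubeletConfiguration (kubelet.config.k8s.io/v1beta1). The
# cgroupDriver value matches the cgroupDriver="systemd" entry that
# appears later in this log; everything else is a generic placeholder.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
print(f"wrote {path} ({len(CONFIG)} bytes)")
```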
May 13 23:57:16.619051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:57:16.653632 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:57:16.662617 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:57:16.682214 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:57:16.686080 systemd-logind[1475]: New session c1 of user core. May 13 23:57:16.862336 systemd[1605]: Queued start job for default target default.target. May 13 23:57:16.871412 systemd[1605]: Created slice app.slice - User Application Slice. May 13 23:57:16.871451 systemd[1605]: Reached target paths.target - Paths. May 13 23:57:16.871508 systemd[1605]: Reached target timers.target - Timers. May 13 23:57:16.873538 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:57:16.890111 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:57:16.890300 systemd[1605]: Reached target sockets.target - Sockets. May 13 23:57:16.890361 systemd[1605]: Reached target basic.target - Basic System. May 13 23:57:16.890424 systemd[1605]: Reached target default.target - Main User Target. May 13 23:57:16.890468 systemd[1605]: Startup finished in 193ms. May 13 23:57:16.890938 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:57:16.901977 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:57:16.970837 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). May 13 23:57:17.039686 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.042089 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.048550 systemd-logind[1475]: New session 2 of user core. May 13 23:57:17.059094 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:57:17.118122 sshd[1618]: Connection closed by 10.0.0.1 port 54830 May 13 23:57:17.118592 sshd-session[1616]: pam_unix(sshd:session): session closed for user core May 13 23:57:17.131927 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:54830.service: Deactivated successfully. May 13 23:57:17.134529 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:57:17.136609 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. May 13 23:57:17.138295 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840). May 13 23:57:17.139454 systemd-logind[1475]: Removed session 2. May 13 23:57:17.193234 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.195196 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.200202 systemd-logind[1475]: New session 3 of user core. May 13 23:57:17.209895 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:57:17.263282 sshd[1626]: Connection closed by 10.0.0.1 port 54840 May 13 23:57:17.263808 sshd-session[1623]: pam_unix(sshd:session): session closed for user core May 13 23:57:17.283375 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:54840.service: Deactivated successfully. May 13 23:57:17.286036 systemd[1]: session-3.scope: Deactivated successfully. 
May 13 23:57:17.288229 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. May 13 23:57:17.290504 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). May 13 23:57:17.292197 systemd-logind[1475]: Removed session 3. May 13 23:57:17.347575 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.349713 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.354840 systemd-logind[1475]: New session 4 of user core. May 13 23:57:17.364868 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:57:17.424508 sshd[1634]: Connection closed by 10.0.0.1 port 54846 May 13 23:57:17.424989 sshd-session[1631]: pam_unix(sshd:session): session closed for user core May 13 23:57:17.441267 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:54846.service: Deactivated successfully. May 13 23:57:17.444158 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:57:17.446332 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. May 13 23:57:17.448278 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852). May 13 23:57:17.449463 systemd-logind[1475]: Removed session 4. May 13 23:57:17.506049 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.508492 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.515003 systemd-logind[1475]: New session 5 of user core. May 13 23:57:17.536022 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:57:17.599945 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:57:17.600318 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:17.622167 sudo[1643]: pam_unix(sudo:session): session closed for user root May 13 23:57:17.624583 sshd[1642]: Connection closed by 10.0.0.1 port 54852 May 13 23:57:17.625236 sshd-session[1639]: pam_unix(sshd:session): session closed for user core May 13 23:57:17.646862 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:54852.service: Deactivated successfully. May 13 23:57:17.649991 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:57:17.652590 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. May 13 23:57:17.654961 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:54876.service - OpenSSH per-connection server daemon (10.0.0.1:54876). May 13 23:57:17.656028 systemd-logind[1475]: Removed session 5. May 13 23:57:17.709132 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 54876 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.711249 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.717058 systemd-logind[1475]: New session 6 of user core. May 13 23:57:17.728985 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:57:17.788365 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:57:17.788733 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:17.793532 sudo[1653]: pam_unix(sudo:session): session closed for user root May 13 23:57:17.801985 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:57:17.802509 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:17.815111 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:57:17.862068 augenrules[1675]: No rules May 13 23:57:17.864139 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:17.864453 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:17.866036 sudo[1652]: pam_unix(sudo:session): session closed for user root May 13 23:57:17.868148 sshd[1651]: Connection closed by 10.0.0.1 port 54876 May 13 23:57:17.868501 sshd-session[1648]: pam_unix(sshd:session): session closed for user core May 13 23:57:17.886715 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:54876.service: Deactivated successfully. May 13 23:57:17.889452 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:57:17.891558 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. May 13 23:57:17.893643 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:46748.service - OpenSSH per-connection server daemon (10.0.0.1:46748). May 13 23:57:17.894785 systemd-logind[1475]: Removed session 6. May 13 23:57:17.947706 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 46748 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:57:17.949566 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:17.955472 systemd-logind[1475]: New session 7 of user core. May 13 23:57:17.967008 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:57:18.027408 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:57:18.027908 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:18.523793 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:57:18.538236 (dockerd)[1707]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:57:18.987008 dockerd[1707]: time="2025-05-13T23:57:18.986837503Z" level=info msg="Starting up" May 13 23:57:18.987781 dockerd[1707]: time="2025-05-13T23:57:18.987741762Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:57:19.845486 dockerd[1707]: time="2025-05-13T23:57:19.845398949Z" level=info msg="Loading containers: start." May 13 23:57:20.093724 kernel: Initializing XFRM netlink socket May 13 23:57:20.203828 systemd-networkd[1413]: docker0: Link UP May 13 23:57:20.290129 dockerd[1707]: time="2025-05-13T23:57:20.290052747Z" level=info msg="Loading containers: done." 
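[Editor's note] systemd-networkd reports "docker0: Link UP" once dockerd creates its default bridge during "Loading containers". A quick existence check from Python, assuming nothing beyond a Linux host:

```python
import socket

# if_nameindex() returns (index, name) pairs for all network interfaces.
names = {name for _, name in socket.if_nameindex()}
print("docker0 present:", "docker0" in names)
```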
May 13 23:57:20.325010 dockerd[1707]: time="2025-05-13T23:57:20.324879970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:57:20.325010 dockerd[1707]: time="2025-05-13T23:57:20.325030452Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:57:20.325361 dockerd[1707]: time="2025-05-13T23:57:20.325210235Z" level=info msg="Daemon has completed initialization" May 13 23:57:20.427273 dockerd[1707]: time="2025-05-13T23:57:20.427182572Z" level=info msg="API listen on /run/docker.sock" May 13 23:57:20.427425 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:57:21.360250 containerd[1491]: time="2025-05-13T23:57:21.360195991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:57:26.226511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159256857.mount: Deactivated successfully. May 13 23:57:26.604701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:57:26.612143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:26.939472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:26.959330 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:27.602867 kubelet[1928]: E0513 23:57:27.602776 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:27.619074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:27.619359 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:27.621223 systemd[1]: kubelet.service: Consumed 368ms CPU time, 96.5M memory peak. 
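[Editor's note] dockerd's warning above ("Not using native diff for overlay2 ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") concerns image-build performance, not correctness. The active storage driver can be confirmed with `docker info`; a small sketch, assuming the docker CLI is on PATH:

```python
import subprocess

# --format with a Go template is part of the standard docker CLI.
driver = subprocess.run(
    ["docker", "info", "--format", "{{.Driver}}"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print("storage driver:", driver)  # expected: overlay2, per the log
```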
May 13 23:57:35.025924 containerd[1491]: time="2025-05-13T23:57:35.025865935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:35.030943 containerd[1491]: time="2025-05-13T23:57:35.030861184Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 13 23:57:35.032286 containerd[1491]: time="2025-05-13T23:57:35.032244541Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:35.039344 containerd[1491]: time="2025-05-13T23:57:35.039297720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:35.040276 containerd[1491]: time="2025-05-13T23:57:35.040231312Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 13.679986463s" May 13 23:57:35.040334 containerd[1491]: time="2025-05-13T23:57:35.040280791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 23:57:35.042759 containerd[1491]: time="2025-05-13T23:57:35.042706843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:57:37.870300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:57:37.872307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:38.088893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:38.107164 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:39.197656 kubelet[1997]: E0513 23:57:39.197559 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:39.202775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:39.202998 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:39.203482 systemd[1]: kubelet.service: Consumed 244ms CPU time, 96.1M memory peak. 
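[Editor's note] For scale: the kube-apiserver pull above reports 27,960,987 bytes read over 13.679986463s, i.e. roughly 2 MB/s from registry.k8s.io, which is consistent with the multi-second pulls of the other control-plane images that follow. The arithmetic, for reference:

```python
bytes_read = 27_960_987          # "bytes read" from the log entry
seconds = 13.679986463           # pull duration from the same entry

rate = bytes_read / seconds
print(f"{rate/1e6:.2f} MB/s ({rate/2**20:.2f} MiB/s)")
# -> about 2.04 MB/s (1.95 MiB/s)
```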
May 13 23:57:43.109514 containerd[1491]: time="2025-05-13T23:57:43.109425094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:43.132766 containerd[1491]: time="2025-05-13T23:57:43.132628575Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 13 23:57:43.187697 containerd[1491]: time="2025-05-13T23:57:43.187619117Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:43.246958 containerd[1491]: time="2025-05-13T23:57:43.246883176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:43.248147 containerd[1491]: time="2025-05-13T23:57:43.248095930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 8.205352944s" May 13 23:57:43.248216 containerd[1491]: time="2025-05-13T23:57:43.248150089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 23:57:43.248830 containerd[1491]: time="2025-05-13T23:57:43.248777098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:57:46.687807 containerd[1491]: time="2025-05-13T23:57:46.687719414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.753410 containerd[1491]: time="2025-05-13T23:57:46.753294141Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 13 23:57:46.780478 containerd[1491]: time="2025-05-13T23:57:46.780363901Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.798606 containerd[1491]: time="2025-05-13T23:57:46.798409910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.799905 containerd[1491]: time="2025-05-13T23:57:46.799831143Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 3.551011477s" May 13 23:57:46.799905 containerd[1491]: time="2025-05-13T23:57:46.799896595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 23:57:46.800917 
containerd[1491]: time="2025-05-13T23:57:46.800591142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:57:49.203408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125395776.mount: Deactivated successfully. May 13 23:57:49.453661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 23:57:49.455631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:49.652065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:49.656751 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:49.750942 kubelet[2025]: E0513 23:57:49.750722 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:49.755029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:49.755247 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:49.755650 systemd[1]: kubelet.service: Consumed 273ms CPU time, 98.2M memory peak. May 13 23:57:52.979890 containerd[1491]: time="2025-05-13T23:57:52.979800138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.987216 containerd[1491]: time="2025-05-13T23:57:52.987070346Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 13 23:57:52.990297 containerd[1491]: time="2025-05-13T23:57:52.990199266Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.996167 containerd[1491]: time="2025-05-13T23:57:52.996089493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.996534 containerd[1491]: time="2025-05-13T23:57:52.996442599Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 6.195810977s" May 13 23:57:52.996534 containerd[1491]: time="2025-05-13T23:57:52.996498851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 23:57:52.997138 containerd[1491]: time="2025-05-13T23:57:52.997102504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:57:54.370167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325009987.mount: Deactivated successfully. 
May 13 23:57:55.420895 containerd[1491]: time="2025-05-13T23:57:55.420335306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:55.422587 containerd[1491]: time="2025-05-13T23:57:55.422472489Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 23:57:55.431191 containerd[1491]: time="2025-05-13T23:57:55.431077779Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:55.448639 containerd[1491]: time="2025-05-13T23:57:55.448496779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:55.449726 containerd[1491]: time="2025-05-13T23:57:55.449632288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.452496321s" May 13 23:57:55.449726 containerd[1491]: time="2025-05-13T23:57:55.449712444Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:57:55.450498 containerd[1491]: time="2025-05-13T23:57:55.450435096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:57:56.144292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473350179.mount: Deactivated successfully. 
May 13 23:57:56.345821 containerd[1491]: time="2025-05-13T23:57:56.345725291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:56.372343 containerd[1491]: time="2025-05-13T23:57:56.372207397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:57:56.391327 containerd[1491]: time="2025-05-13T23:57:56.391216547Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:56.413838 containerd[1491]: time="2025-05-13T23:57:56.413654583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:56.415691 containerd[1491]: time="2025-05-13T23:57:56.415019704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 964.528671ms" May 13 23:57:56.415691 containerd[1491]: time="2025-05-13T23:57:56.415065985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:57:56.416437 containerd[1491]: time="2025-05-13T23:57:56.416119265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:57:58.253379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627062120.mount: Deactivated successfully. May 13 23:57:59.705654 update_engine[1477]: I20250513 23:57:59.705538 1477 update_attempter.cc:509] Updating boot flags... May 13 23:57:59.827486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 23:57:59.829369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:59.924699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2118) May 13 23:58:00.841817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2120) May 13 23:58:00.872292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:00.887352 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:00.890716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2120) May 13 23:58:00.973230 kubelet[2134]: E0513 23:58:00.973151 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:00.978819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:00.979110 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:00.979605 systemd[1]: kubelet.service: Consumed 243ms CPU time, 95.1M memory peak. May 13 23:58:04.822698 containerd[1491]: time="2025-05-13T23:58:04.822622707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:04.877166 containerd[1491]: time="2025-05-13T23:58:04.877016582Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 23:58:04.952570 containerd[1491]: time="2025-05-13T23:58:04.952479307Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:04.988074 containerd[1491]: time="2025-05-13T23:58:04.987997428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:04.989423 containerd[1491]: time="2025-05-13T23:58:04.989381043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 8.573180164s" May 13 23:58:04.989494 containerd[1491]: time="2025-05-13T23:58:04.989424781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 23:58:07.539653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:07.539931 systemd[1]: kubelet.service: Consumed 243ms CPU time, 95.1M memory peak. May 13 23:58:07.542464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:07.571731 systemd[1]: Reload requested from client PID 2200 ('systemctl') (unit session-7.scope)... May 13 23:58:07.571752 systemd[1]: Reloading... May 13 23:58:07.674697 zram_generator::config[2244]: No configuration found. May 13 23:58:09.353993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:09.485134 systemd[1]: Reloading finished in 1912 ms. 
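[Editor's note] After this daemon reload the kubelet is restarted with a real config, and in the entries that follow it repeatedly logs "dial tcp 10.0.0.80:6443: connect: connection refused": it comes up before the API server it registers with, which is normal while a static-pod control plane bootstraps. The check those dial errors represent, as a minimal probe; host and port are taken from the log below:

```python
import socket

def api_server_up(host="10.0.0.80", port=6443, timeout=2.0):
    """True once something is listening on the API server port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

print("kube-apiserver reachable:", api_server_up())
```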
May 13 23:58:09.552858 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 23:58:09.552969 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 23:58:09.553303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:58:09.553358 systemd[1]: kubelet.service: Consumed 154ms CPU time, 83.6M memory peak.
May 13 23:58:09.556375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:58:09.767972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:58:09.781179 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:58:09.821341 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:58:09.821341 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:58:09.821341 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:58:10.399726 kubelet[2293]: I0513 23:58:10.399514 2293 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:58:10.711012 kubelet[2293]: I0513 23:58:10.710882 2293 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:58:10.711012 kubelet[2293]: I0513 23:58:10.710927 2293 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:58:10.711206 kubelet[2293]: I0513 23:58:10.711182 2293 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:58:10.824095 kubelet[2293]: I0513 23:58:10.823851 2293 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:58:10.824945 kubelet[2293]: E0513 23:58:10.824900 2293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:10.843860 kubelet[2293]: I0513 23:58:10.843798 2293 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:58:10.854536 kubelet[2293]: I0513 23:58:10.854498 2293 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:58:10.858956 kubelet[2293]: I0513 23:58:10.858870 2293 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:58:10.859324 kubelet[2293]: I0513 23:58:10.859253 2293 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:58:10.859564 kubelet[2293]: I0513 23:58:10.859307 2293 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:58:10.859564 kubelet[2293]: I0513 23:58:10.859564 2293 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:58:10.859775 kubelet[2293]: I0513 23:58:10.859578 2293 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:58:10.859813 kubelet[2293]: I0513 23:58:10.859798 2293 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:10.864090 kubelet[2293]: I0513 23:58:10.864036 2293 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:58:10.864090 kubelet[2293]: I0513 23:58:10.864080 2293 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:58:10.864229 kubelet[2293]: I0513 23:58:10.864134 2293 kubelet.go:314] "Adding apiserver pod source"
May 13 23:58:10.864229 kubelet[2293]: I0513 23:58:10.864161 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:58:10.881820 kubelet[2293]: I0513 23:58:10.881572 2293 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:58:10.887459 kubelet[2293]: W0513 23:58:10.887222 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:10.887592 kubelet[2293]: W0513 23:58:10.887346 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:10.887592 kubelet[2293]: E0513 23:58:10.887503 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:10.887592 kubelet[2293]: E0513 23:58:10.887470 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:10.891168 kubelet[2293]: I0513 23:58:10.891120 2293 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:58:10.892061 kubelet[2293]: W0513 23:58:10.892013 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:58:10.893089 kubelet[2293]: I0513 23:58:10.893056 2293 server.go:1269] "Started kubelet"
May 13 23:58:10.893882 kubelet[2293]: I0513 23:58:10.893831 2293 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:58:10.896716 kubelet[2293]: I0513 23:58:10.896653 2293 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:58:10.896803 kubelet[2293]: I0513 23:58:10.896724 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:58:10.897339 kubelet[2293]: I0513 23:58:10.897298 2293 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:58:10.898465 kubelet[2293]: I0513 23:58:10.898431 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:58:10.900300 kubelet[2293]: I0513 23:58:10.898649 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:58:10.900300 kubelet[2293]: I0513 23:58:10.899129 2293 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:58:10.900300 kubelet[2293]: I0513 23:58:10.899257 2293 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:58:10.900300 kubelet[2293]: I0513 23:58:10.899326 2293 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:58:10.900300 kubelet[2293]: W0513 23:58:10.899721 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:10.900300 kubelet[2293]: E0513 23:58:10.899762 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:10.900300 kubelet[2293]: E0513 23:58:10.899883 2293 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:58:10.900300 kubelet[2293]: E0513 23:58:10.899987 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:10.900300 kubelet[2293]: E0513 23:58:10.900194 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms"
May 13 23:58:10.900730 kubelet[2293]: I0513 23:58:10.900709 2293 factory.go:221] Registration of the systemd container factory successfully
May 13 23:58:10.900870 kubelet[2293]: I0513 23:58:10.900821 2293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:58:10.901820 kubelet[2293]: I0513 23:58:10.901803 2293 factory.go:221] Registration of the containerd container factory successfully
May 13 23:58:10.922891 kubelet[2293]: E0513 23:58:10.920109 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b94d719aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:58:10.893024984 +0000 UTC m=+1.106871836,LastTimestamp:2025-05-13 23:58:10.893024984 +0000 UTC m=+1.106871836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:58:10.924486 kubelet[2293]: I0513 23:58:10.924424 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:58:10.925390 kubelet[2293]: I0513 23:58:10.925038 2293 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:58:10.925390 kubelet[2293]: I0513 23:58:10.925059 2293 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:58:10.925390 kubelet[2293]: I0513 23:58:10.925079 2293 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:10.927025 kubelet[2293]: I0513 23:58:10.926983 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:58:10.927199 kubelet[2293]: I0513 23:58:10.927171 2293 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:58:10.927247 kubelet[2293]: I0513 23:58:10.927207 2293 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:58:10.927309 kubelet[2293]: E0513 23:58:10.927262 2293 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:58:10.928061 kubelet[2293]: W0513 23:58:10.927999 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:10.928123 kubelet[2293]: E0513 23:58:10.928069 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:11.001250 kubelet[2293]: E0513 23:58:11.001059 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:11.007270 kubelet[2293]: I0513 23:58:11.007218 2293 policy_none.go:49] "None policy: Start"
May 13 23:58:11.008101 kubelet[2293]: I0513 23:58:11.008072 2293 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:58:11.008101 kubelet[2293]: I0513 23:58:11.008099 2293 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:58:11.028513 kubelet[2293]: E0513 23:58:11.028392 2293 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 23:58:11.101333 kubelet[2293]: E0513 23:58:11.101281 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:11.101504 kubelet[2293]: E0513 23:58:11.101355 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms"
May 13 23:58:11.172328 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 23:58:11.194719 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 23:58:11.198972 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 23:58:11.202325 kubelet[2293]: E0513 23:58:11.202275 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:11.215460 kubelet[2293]: I0513 23:58:11.215229 2293 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:58:11.215598 kubelet[2293]: I0513 23:58:11.215572 2293 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:58:11.215745 kubelet[2293]: I0513 23:58:11.215591 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:58:11.216179 kubelet[2293]: I0513 23:58:11.216149 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:58:11.217101 kubelet[2293]: E0513 23:58:11.217077 2293 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 23:58:11.237930 systemd[1]: Created slice kubepods-burstable-pod399569b895729331bee62c40a5744811.slice - libcontainer container kubepods-burstable-pod399569b895729331bee62c40a5744811.slice.
May 13 23:58:11.257512 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 13 23:58:11.261559 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 13 23:58:11.301699 kubelet[2293]: I0513 23:58:11.301599 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:11.301699 kubelet[2293]: I0513 23:58:11.301652 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:11.301699 kubelet[2293]: I0513 23:58:11.301703 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:11.301699 kubelet[2293]: I0513 23:58:11.301734 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:11.302004 kubelet[2293]: I0513 23:58:11.301752 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 23:58:11.302004 kubelet[2293]: I0513 23:58:11.301775 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:11.302004 kubelet[2293]: I0513 23:58:11.301793 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:11.302004 kubelet[2293]: I0513 23:58:11.301812 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:11.302004 kubelet[2293]: I0513 23:58:11.301828 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:11.319082 kubelet[2293]: I0513 23:58:11.319025 2293 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:11.320003 kubelet[2293]: E0513 23:58:11.319928 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost"
May 13 23:58:11.502367 kubelet[2293]: E0513 23:58:11.502292 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms"
May 13 23:58:11.522393 kubelet[2293]: I0513 23:58:11.522259 2293 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:11.522699 kubelet[2293]: E0513 23:58:11.522650 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost"
May 13 23:58:11.555230 kubelet[2293]: E0513 23:58:11.555151 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:11.556111 containerd[1491]: time="2025-05-13T23:58:11.556045570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:399569b895729331bee62c40a5744811,Namespace:kube-system,Attempt:0,}"
May 13 23:58:11.562538 kubelet[2293]: E0513 23:58:11.562439 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:11.563212 containerd[1491]: time="2025-05-13T23:58:11.563168073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 13 23:58:11.564416 kubelet[2293]: E0513 23:58:11.564367 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:11.564946 containerd[1491]: time="2025-05-13T23:58:11.564813268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 13 23:58:11.924313 kubelet[2293]: I0513 23:58:11.924255 2293 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:11.924854 kubelet[2293]: E0513 23:58:11.924744 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost"
May 13 23:58:12.014369 kubelet[2293]: W0513 23:58:12.014243 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:12.014369 kubelet[2293]: E0513 23:58:12.014347 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:12.014369 kubelet[2293]: W0513 23:58:12.014342 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:12.014615 kubelet[2293]: E0513 23:58:12.014414 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:12.087221 kubelet[2293]: W0513 23:58:12.087095 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:12.087221 kubelet[2293]: E0513 23:58:12.087189 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:12.303483 kubelet[2293]: E0513 23:58:12.303296 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s"
May 13 23:58:12.379953 kubelet[2293]: W0513 23:58:12.379832 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:12.379953 kubelet[2293]: E0513 23:58:12.379946 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:12.727348 kubelet[2293]: I0513 23:58:12.727296 2293 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:12.727824 kubelet[2293]: E0513 23:58:12.727768 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost"
May 13 23:58:12.750946 containerd[1491]: time="2025-05-13T23:58:12.746026591Z" level=info msg="connecting to shim 963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f" address="unix:///run/containerd/s/c42b57cd3d5e612ff9b07aaae1beaa4c3c8ff4f4120343f9afb7b1929a4d176b" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:12.750946 containerd[1491]: time="2025-05-13T23:58:12.748583112Z" level=info msg="connecting to shim 030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809" address="unix:///run/containerd/s/9639851ee45ce8959ffecaf53f95e304a751c0104915b0ccdcc2639dce70531a" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:12.764289 containerd[1491]: time="2025-05-13T23:58:12.764183970Z" level=info msg="connecting to shim 46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1" address="unix:///run/containerd/s/c5e2fc5ea15636f0c878b263b5f48caa925594c3af885261f9f2e987f0d5f34a" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:12.790226 systemd[1]: Started cri-containerd-963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f.scope - libcontainer container 963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f.
May 13 23:58:12.795729 systemd[1]: Started cri-containerd-030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809.scope - libcontainer container 030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809.
May 13 23:58:12.813907 systemd[1]: Started cri-containerd-46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1.scope - libcontainer container 46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1.
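The three RunPodSandbox requests above are the control-plane static pods, picked up from the staticPodPath (/etc/kubernetes/manifests) registered earlier; their pod UIDs match the kubepods-burstable-pod*.slice units systemd created. A skeleton of what one such manifest might look like; only the pod name, namespace, and the kubeconfig volume name come from this log, while the image tag and file paths are assumptions:

    # /etc/kubernetes/manifests/kube-scheduler.yaml (illustrative skeleton)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.31.0   # assumed to match kubeletVersion=v1.31.0
        volumeMounts:
        - name: kubeconfig                              # volume name as logged by reconciler_common.go
          mountPath: /etc/kubernetes/scheduler.conf     # assumed kubeadm path
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf          # assumed kubeadm path
          type: FileOrCreate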
May 13 23:58:12.878478 containerd[1491]: time="2025-05-13T23:58:12.878415537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:399569b895729331bee62c40a5744811,Namespace:kube-system,Attempt:0,} returns sandbox id \"963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f\""
May 13 23:58:12.880053 kubelet[2293]: E0513 23:58:12.880014 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:12.882799 containerd[1491]: time="2025-05-13T23:58:12.882754449Z" level=info msg="CreateContainer within sandbox \"963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:58:12.890577 containerd[1491]: time="2025-05-13T23:58:12.890493131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809\""
May 13 23:58:12.891538 kubelet[2293]: E0513 23:58:12.891428 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:12.893954 containerd[1491]: time="2025-05-13T23:58:12.893911902Z" level=info msg="CreateContainer within sandbox \"030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:58:12.911025 containerd[1491]: time="2025-05-13T23:58:12.910964817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1\""
May 13 23:58:12.911800 kubelet[2293]: E0513 23:58:12.911773 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:12.914263 containerd[1491]: time="2025-05-13T23:58:12.913904185Z" level=info msg="CreateContainer within sandbox \"46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 23:58:12.955702 kubelet[2293]: E0513 23:58:12.955612 2293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:13.130026 containerd[1491]: time="2025-05-13T23:58:13.129944207Z" level=info msg="Container e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:13.226643 containerd[1491]: time="2025-05-13T23:58:13.226569678Z" level=info msg="Container b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:13.339435 containerd[1491]: time="2025-05-13T23:58:13.339352242Z" level=info msg="Container 58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:13.507375 containerd[1491]: time="2025-05-13T23:58:13.507193082Z" level=info msg="CreateContainer within sandbox \"030575889503d300fb729cc5bc3683190a674009584039aa298b24d41b9e8809\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576\""
May 13 23:58:13.508078 containerd[1491]: time="2025-05-13T23:58:13.508040782Z" level=info msg="StartContainer for \"b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576\""
May 13 23:58:13.509320 containerd[1491]: time="2025-05-13T23:58:13.509284061Z" level=info msg="connecting to shim b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576" address="unix:///run/containerd/s/9639851ee45ce8959ffecaf53f95e304a751c0104915b0ccdcc2639dce70531a" protocol=ttrpc version=3
May 13 23:58:13.534957 systemd[1]: Started cri-containerd-b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576.scope - libcontainer container b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576.
May 13 23:58:13.569479 kubelet[2293]: E0513 23:58:13.569355 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b94d719aed8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:58:10.893024984 +0000 UTC m=+1.106871836,LastTimestamp:2025-05-13 23:58:10.893024984 +0000 UTC m=+1.106871836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:58:13.850812 kubelet[2293]: W0513 23:58:13.850764 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:13.850812 kubelet[2293]: E0513 23:58:13.850815 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:13.904486 kubelet[2293]: E0513 23:58:13.904403 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="3.2s"
May 13 23:58:13.993066 kubelet[2293]: W0513 23:58:13.993005 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:13.993066 kubelet[2293]: E0513 23:58:13.993064 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:14.078454 kubelet[2293]: W0513 23:58:14.078405 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 13 23:58:14.078602 kubelet[2293]: E0513 23:58:14.078477 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
May 13 23:58:14.140571 containerd[1491]: time="2025-05-13T23:58:14.140154878Z" level=info msg="CreateContainer within sandbox \"963e36aa12d4cfc9dc5e328ccdd654741fe534c82659477c15b9e0892bc18e6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2\""
May 13 23:58:14.141244 containerd[1491]: time="2025-05-13T23:58:14.140728330Z" level=info msg="StartContainer for \"b6fe0f431444199ad5ddaf457fe45716abf74cbeed5a6969d88d22b2acb2c576\" returns successfully"
May 13 23:58:14.141244 containerd[1491]: time="2025-05-13T23:58:14.141136773Z" level=info msg="StartContainer for \"e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2\""
May 13 23:58:14.142376 containerd[1491]: time="2025-05-13T23:58:14.142327701Z" level=info msg="connecting to shim e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2" address="unix:///run/containerd/s/c42b57cd3d5e612ff9b07aaae1beaa4c3c8ff4f4120343f9afb7b1929a4d176b" protocol=ttrpc version=3
May 13 23:58:14.152762 kubelet[2293]: E0513 23:58:14.152567 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:14.168869 systemd[1]: Started cri-containerd-e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2.scope - libcontainer container e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2.
May 13 23:58:14.329032 kubelet[2293]: I0513 23:58:14.328963 2293 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:14.634094 containerd[1491]: time="2025-05-13T23:58:14.633997137Z" level=info msg="CreateContainer within sandbox \"46ca7468291e48d299ad54045d9ba2ac68f244ab09ecfea30d8820f91028b4f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5\""
May 13 23:58:14.634941 containerd[1491]: time="2025-05-13T23:58:14.634709553Z" level=info msg="StartContainer for \"58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5\""
May 13 23:58:14.642716 containerd[1491]: time="2025-05-13T23:58:14.637170476Z" level=info msg="connecting to shim 58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5" address="unix:///run/containerd/s/c5e2fc5ea15636f0c878b263b5f48caa925594c3af885261f9f2e987f0d5f34a" protocol=ttrpc version=3
May 13 23:58:14.642716 containerd[1491]: time="2025-05-13T23:58:14.637910650Z" level=info msg="StartContainer for \"e1eeaa5ee1e7cbec2d7f317e65f99183b413392a5089bcbb087bfed02d5e3ab2\" returns successfully"
May 13 23:58:14.748892 systemd[1]: Started cri-containerd-58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5.scope - libcontainer container 58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5.
May 13 23:58:15.031155 containerd[1491]: time="2025-05-13T23:58:15.031007037Z" level=info msg="StartContainer for \"58a506ce5f90ec393f6f1a9abf926388824a926c0303772a0300bf9a9dfd3bb5\" returns successfully"
May 13 23:58:15.167858 kubelet[2293]: E0513 23:58:15.166789 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:15.171252 kubelet[2293]: E0513 23:58:15.170976 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:15.178851 kubelet[2293]: E0513 23:58:15.175841 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:16.203731 kubelet[2293]: E0513 23:58:16.203210 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:16.203731 kubelet[2293]: E0513 23:58:16.203620 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:17.217003 kubelet[2293]: E0513 23:58:17.216932 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:17.311852 kubelet[2293]: E0513 23:58:17.311792 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 23:58:17.416708 kubelet[2293]: I0513 23:58:17.416617 2293 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 23:58:17.416708 kubelet[2293]: E0513 23:58:17.416717 2293 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 13 23:58:17.431461 kubelet[2293]: E0513 23:58:17.431399 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:17.532367 kubelet[2293]: E0513 23:58:17.532213 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:17.633128 kubelet[2293]: E0513 23:58:17.633026 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:17.733852 kubelet[2293]: E0513 23:58:17.733761 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:17.834251 kubelet[2293]: E0513 23:58:17.834177 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:17.934893 kubelet[2293]: E0513 23:58:17.934789 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.036005 kubelet[2293]: E0513 23:58:18.035947 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.136789 kubelet[2293]: E0513 23:58:18.136605 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.218946 kubelet[2293]: E0513 23:58:18.218902 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:18.237226 kubelet[2293]: E0513 23:58:18.237147 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.337883 kubelet[2293]: E0513 23:58:18.337817 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.439105 kubelet[2293]: E0513 23:58:18.438883 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.539633 kubelet[2293]: E0513 23:58:18.539556 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.640374 kubelet[2293]: E0513 23:58:18.640279 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.741574 kubelet[2293]: E0513 23:58:18.741262 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.842191 kubelet[2293]: E0513 23:58:18.842102 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:18.942570 kubelet[2293]: E0513 23:58:18.942491 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.043450 kubelet[2293]: E0513 23:58:19.043282 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.143870 kubelet[2293]: E0513 23:58:19.143824 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.244089 kubelet[2293]: E0513 23:58:19.244030 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.344693 kubelet[2293]: E0513 23:58:19.344596 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.445632 kubelet[2293]: E0513 23:58:19.445569 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.546238 kubelet[2293]: E0513 23:58:19.546167 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.646948 kubelet[2293]: E0513 23:58:19.646786 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.678135 kubelet[2293]: E0513 23:58:19.678102 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:19.747414 kubelet[2293]: E0513 23:58:19.747359 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.848274 kubelet[2293]: E0513 23:58:19.848212 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:19.949030 kubelet[2293]: E0513 23:58:19.948850 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:20.049927 kubelet[2293]: E0513 23:58:20.049856 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:20.150694 kubelet[2293]: E0513 23:58:20.150610 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:20.862227 kubelet[2293]: E0513 23:58:20.862184 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:20.887591 kubelet[2293]: I0513 23:58:20.887529 2293 apiserver.go:52] "Watching apiserver"
May 13 23:58:20.900111 kubelet[2293]: I0513 23:58:20.900061 2293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 23:58:21.180307 kubelet[2293]: I0513 23:58:21.180139 2293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.180083674 podStartE2EDuration="1.180083674s" podCreationTimestamp="2025-05-13 23:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:21.179945699 +0000 UTC m=+11.393792541" watchObservedRunningTime="2025-05-13 23:58:21.180083674 +0000 UTC m=+11.393930516"
May 13 23:58:21.221227 kubelet[2293]: E0513 23:58:21.221177 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:22.175992 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-7.scope)...
May 13 23:58:22.176014 systemd[1]: Reloading...
May 13 23:58:22.272717 zram_generator::config[2616]: No configuration found.
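Both systemd reloads in this log print the same docker.socket warning (the second appears just below): line 6 of the unit still points ListenStream= at the legacy /var/run directory, so systemd rewrites it to /run at load time. The update it asks for is a one-line change in the unit file; a sketch, with only the two socket paths taken from the log:

    # /usr/lib/systemd/system/docker.socket, [Socket] section (sketch)
    [Socket]
    ListenStream=/run/docker.sock   # was /var/run/docker.sock; /var/run is a legacy symlink to /run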
May 13 23:58:22.410618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:58:22.556352 systemd[1]: Reloading finished in 379 ms.
May 13 23:58:22.591741 kubelet[2293]: I0513 23:58:22.591697 2293 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:58:22.592021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:58:22.613400 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:58:22.613740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:58:22.613819 systemd[1]: kubelet.service: Consumed 1.020s CPU time, 119.8M memory peak.
May 13 23:58:22.616079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:58:22.862746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:58:22.878271 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:58:22.933256 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:58:22.933256 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:58:22.933256 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:58:22.933726 kubelet[2658]: I0513 23:58:22.933343 2658 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:58:22.942697 kubelet[2658]: I0513 23:58:22.941674 2658 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:58:22.942697 kubelet[2658]: I0513 23:58:22.941705 2658 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:58:22.942697 kubelet[2658]: I0513 23:58:22.941977 2658 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:58:22.943946 kubelet[2658]: I0513 23:58:22.943884 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:58:22.948594 kubelet[2658]: I0513 23:58:22.948532 2658 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:58:22.954715 kubelet[2658]: I0513 23:58:22.954625 2658 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:58:22.961604 kubelet[2658]: I0513 23:58:22.961520 2658 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:58:22.961801 kubelet[2658]: I0513 23:58:22.961700 2658 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:58:22.961947 kubelet[2658]: I0513 23:58:22.961891 2658 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:58:22.962351 kubelet[2658]: I0513 23:58:22.961944 2658 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:58:22.962351 kubelet[2658]: I0513 23:58:22.962293 2658 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:58:22.962351 kubelet[2658]: I0513 23:58:22.962306 2658 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:58:22.962351 kubelet[2658]: I0513 23:58:22.962349 2658 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:22.962891 kubelet[2658]: I0513 23:58:22.962530 2658 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:58:22.962891 kubelet[2658]: I0513 23:58:22.962549 2658 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:58:22.962891 kubelet[2658]: I0513 23:58:22.962592 2658 kubelet.go:314] "Adding apiserver pod source"
May 13 23:58:22.962891 kubelet[2658]: I0513 23:58:22.962612 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:58:22.965059 kubelet[2658]: I0513 23:58:22.964548 2658 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:58:22.965136 kubelet[2658]: I0513 23:58:22.965095 2658 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:58:22.966119 kubelet[2658]: I0513 23:58:22.966086 2658 server.go:1269] "Started kubelet"
May 13 23:58:22.973054 kubelet[2658]: I0513 23:58:22.971268 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:58:22.973054 kubelet[2658]: I0513 23:58:22.972766 2658 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:58:22.973820 kubelet[2658]: I0513 23:58:22.973793 2658 factory.go:221] Registration of the systemd container factory successfully
May 13 23:58:22.974083 kubelet[2658]: I0513 23:58:22.974057 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:58:22.974432 kubelet[2658]: I0513 23:58:22.974404 2658 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:58:22.975888 kubelet[2658]: I0513 23:58:22.975826 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:58:22.976177 kubelet[2658]: I0513 23:58:22.976148 2658 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:58:22.976273 kubelet[2658]: I0513 23:58:22.976259 2658 factory.go:221] Registration of the containerd container factory successfully
May 13 23:58:22.977261 kubelet[2658]: I0513 23:58:22.977220 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:58:22.978050 kubelet[2658]: I0513 23:58:22.978020 2658 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:58:22.978162 kubelet[2658]: E0513 23:58:22.978142 2658 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:58:22.978651 kubelet[2658]: I0513 23:58:22.978621 2658 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:58:22.978824 kubelet[2658]: I0513 23:58:22.978802 2658 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:58:23.001372 kubelet[2658]: I0513 23:58:23.001288 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:58:23.003605 kubelet[2658]: I0513 23:58:23.003346 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:58:23.003605 kubelet[2658]: I0513 23:58:23.003376 2658 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:58:23.003605 kubelet[2658]: I0513 23:58:23.003400 2658 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:58:23.003605 kubelet[2658]: E0513 23:58:23.003451 2658 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:58:23.036180 kubelet[2658]: I0513 23:58:23.036119 2658 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:58:23.036180 kubelet[2658]: I0513 23:58:23.036149 2658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:58:23.036180 kubelet[2658]: I0513 23:58:23.036182 2658 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:23.036458 kubelet[2658]: I0513 23:58:23.036426 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:58:23.036505 kubelet[2658]: I0513 23:58:23.036448 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:58:23.036505 kubelet[2658]: I0513 23:58:23.036472 2658 policy_none.go:49] "None policy: Start"
May 13 23:58:23.038601 kubelet[2658]: I0513 23:58:23.037248 2658 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:58:23.038601 kubelet[2658]: I0513 23:58:23.037286 2658 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:58:23.038601 kubelet[2658]: I0513 23:58:23.037507 2658 state_mem.go:75] "Updated machine memory state"
May 13 23:58:23.044198 kubelet[2658]: I0513 23:58:23.043484 2658 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:58:23.044198 kubelet[2658]: I0513 23:58:23.043796 2658 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:58:23.044198 kubelet[2658]: I0513 23:58:23.043817 2658 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:58:23.044198 kubelet[2658]: I0513 23:58:23.044085 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:58:23.150256 kubelet[2658]: I0513 23:58:23.150125 2658 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:58:23.279650 kubelet[2658]: I0513 23:58:23.279575 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 23:58:23.279650 kubelet[2658]: I0513 23:58:23.279640 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:23.279882 kubelet[2658]: I0513 23:58:23.279699 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:23.279882 kubelet[2658]: I0513 23:58:23.279715 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:23.279882 kubelet[2658]: I0513 23:58:23.279730 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:23.279882 kubelet[2658]: I0513 23:58:23.279750 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:23.279882 kubelet[2658]: I0513 23:58:23.279771 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/399569b895729331bee62c40a5744811-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"399569b895729331bee62c40a5744811\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:58:23.280043 kubelet[2658]: I0513 23:58:23.279851 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:23.280043 kubelet[2658]: I0513 23:58:23.279943 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:58:23.849327 kubelet[2658]: E0513 23:58:23.849272 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:23.849514 kubelet[2658]: E0513 23:58:23.849506 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:23.849680 kubelet[2658]: E0513 23:58:23.849627 2658 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 23:58:23.849787 kubelet[2658]: E0513 23:58:23.849771 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 23:58:23.884910 kubelet[2658]: I0513 23:58:23.884865 2658 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 13 23:58:23.885064 kubelet[2658]: I0513 23:58:23.885020 2658 kubelet_node_status.go:75] "Successfully registered node"
node="localhost" May 13 23:58:23.963761 kubelet[2658]: I0513 23:58:23.963698 2658 apiserver.go:52] "Watching apiserver" May 13 23:58:23.979750 kubelet[2658]: I0513 23:58:23.979641 2658 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:58:24.019847 kubelet[2658]: E0513 23:58:24.019594 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:24.019847 kubelet[2658]: E0513 23:58:24.019688 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:24.019847 kubelet[2658]: E0513 23:58:24.019795 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:25.021047 kubelet[2658]: E0513 23:58:25.020999 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:26.584575 kubelet[2658]: I0513 23:58:26.584488 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.584444876 podStartE2EDuration="3.584444876s" podCreationTimestamp="2025-05-13 23:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:24.741251509 +0000 UTC m=+1.857161666" watchObservedRunningTime="2025-05-13 23:58:26.584444876 +0000 UTC m=+3.700355003" May 13 23:58:28.902281 kubelet[2658]: I0513 23:58:28.902178 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.902156742 podStartE2EDuration="5.902156742s" podCreationTimestamp="2025-05-13 23:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:28.123549585 +0000 UTC m=+5.239459712" watchObservedRunningTime="2025-05-13 23:58:28.902156742 +0000 UTC m=+6.018066869" May 13 23:58:30.792146 kubelet[2658]: E0513 23:58:30.792034 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:30.899704 kubelet[2658]: E0513 23:58:30.899636 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:31.032381 kubelet[2658]: E0513 23:58:31.032067 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:31.032381 kubelet[2658]: E0513 23:58:31.032199 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:31.063589 kubelet[2658]: E0513 23:58:31.063444 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
23:58:31.776289 kubelet[2658]: I0513 23:58:31.776256 2658 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:58:31.814523 kubelet[2658]: I0513 23:58:31.776734 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:58:31.815319 containerd[1491]: time="2025-05-13T23:58:31.776593869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:58:32.033778 kubelet[2658]: E0513 23:58:32.033635 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:32.033941 kubelet[2658]: E0513 23:58:32.033821 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:33.218754 kubelet[2658]: W0513 23:58:33.216758 2658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 23:58:33.218754 kubelet[2658]: E0513 23:58:33.216804 2658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 23:58:33.218754 kubelet[2658]: W0513 23:58:33.216844 2658 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 23:58:33.218754 kubelet[2658]: E0513 23:58:33.216855 2658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 23:58:33.231192 systemd[1]: Created slice kubepods-besteffort-podd721a8f9_42d8_4e2d_8305_dc59868c686d.slice - libcontainer container kubepods-besteffort-podd721a8f9_42d8_4e2d_8305_dc59868c686d.slice. 
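The podCIDR handoff just above is worth unpacking: the kubelet pushes 192.168.0.0/24 to the container runtime over CRI, and containerd replies that no CNI config template is specified and that it will wait for another system component to drop a network config, which is what the Calico install driven by tigera-operator later does. A hedged sketch of the kind of conflist such a component might write under /etc/cni/net.d/ follows; the file layout, plugin choice, and every field except the subnet (taken from the log) are assumptions:

    {
      "cniVersion": "0.4.0",
      "name": "example-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "192.168.0.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Until a file like this appears, the runtime keeps reporting the network as not ready, which is the "cni plugin not initialized" condition seen further down for csi-node-driver-ppkvw.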
May 13 23:58:33.235528 kubelet[2658]: I0513 23:58:33.235485 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d721a8f9-42d8-4e2d-8305-dc59868c686d-lib-modules\") pod \"kube-proxy-49jhn\" (UID: \"d721a8f9-42d8-4e2d-8305-dc59868c686d\") " pod="kube-system/kube-proxy-49jhn" May 13 23:58:33.235614 kubelet[2658]: I0513 23:58:33.235533 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2j5\" (UniqueName: \"kubernetes.io/projected/d721a8f9-42d8-4e2d-8305-dc59868c686d-kube-api-access-9n2j5\") pod \"kube-proxy-49jhn\" (UID: \"d721a8f9-42d8-4e2d-8305-dc59868c686d\") " pod="kube-system/kube-proxy-49jhn" May 13 23:58:33.235614 kubelet[2658]: I0513 23:58:33.235552 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d721a8f9-42d8-4e2d-8305-dc59868c686d-kube-proxy\") pod \"kube-proxy-49jhn\" (UID: \"d721a8f9-42d8-4e2d-8305-dc59868c686d\") " pod="kube-system/kube-proxy-49jhn" May 13 23:58:33.235614 kubelet[2658]: I0513 23:58:33.235566 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d721a8f9-42d8-4e2d-8305-dc59868c686d-xtables-lock\") pod \"kube-proxy-49jhn\" (UID: \"d721a8f9-42d8-4e2d-8305-dc59868c686d\") " pod="kube-system/kube-proxy-49jhn" May 13 23:58:33.322999 systemd[1]: Created slice kubepods-besteffort-pode0540089_ebd5_485b_86b3_9172e9340324.slice - libcontainer container kubepods-besteffort-pode0540089_ebd5_485b_86b3_9172e9340324.slice. May 13 23:58:33.336161 kubelet[2658]: I0513 23:58:33.336068 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e0540089-ebd5-485b-86b3-9172e9340324-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-nzvxq\" (UID: \"e0540089-ebd5-485b-86b3-9172e9340324\") " pod="tigera-operator/tigera-operator-6f6897fdc5-nzvxq" May 13 23:58:33.336161 kubelet[2658]: I0513 23:58:33.336160 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62n7\" (UniqueName: \"kubernetes.io/projected/e0540089-ebd5-485b-86b3-9172e9340324-kube-api-access-p62n7\") pod \"tigera-operator-6f6897fdc5-nzvxq\" (UID: \"e0540089-ebd5-485b-86b3-9172e9340324\") " pod="tigera-operator/tigera-operator-6f6897fdc5-nzvxq" May 13 23:58:33.633632 containerd[1491]: time="2025-05-13T23:58:33.633561960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-nzvxq,Uid:e0540089-ebd5-485b-86b3-9172e9340324,Namespace:tigera-operator,Attempt:0,}" May 13 23:58:33.735719 containerd[1491]: time="2025-05-13T23:58:33.735628585Z" level=info msg="connecting to shim 013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449" address="unix:///run/containerd/s/4b71ff263f9f0402cb6b743a1027bc6bdc6efe2d6836dc62bce5200d0da1fc83" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:33.771942 systemd[1]: Started cri-containerd-013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449.scope - libcontainer container 013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449. 
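The four VerifyControllerAttachedVolume lines for kube-proxy-49jhn above map one-to-one onto a pod spec: a configMap volume (kube-proxy), a projected service-account token (kube-api-access-9n2j5), and two hostPath mounts (lib-modules, xtables-lock). A hedged reconstruction of that fragment; only the volume names come from the log, while the paths and types are the conventional kube-proxy values and are assumptions here:

    # hypothetical pod-spec fragment matching the reconciler entries above
    volumes:
      - name: kube-proxy
        configMap:
          name: kube-proxy          # the same configmap the reflector was forbidden to list
      - name: kube-api-access-9n2j5
        projected:
          sources:
            - serviceAccountToken:
                path: token
      - name: lib-modules
        hostPath:
          path: /lib/modules        # assumed; only the volume name is logged
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock   # assumed
          type: FileOrCreate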
May 13 23:58:33.873352 containerd[1491]: time="2025-05-13T23:58:33.873288380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-nzvxq,Uid:e0540089-ebd5-485b-86b3-9172e9340324,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449\"" May 13 23:58:33.875701 containerd[1491]: time="2025-05-13T23:58:33.875189598Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:58:34.337272 kubelet[2658]: E0513 23:58:34.337195 2658 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 13 23:58:34.337813 kubelet[2658]: E0513 23:58:34.337329 2658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d721a8f9-42d8-4e2d-8305-dc59868c686d-kube-proxy podName:d721a8f9-42d8-4e2d-8305-dc59868c686d nodeName:}" failed. No retries permitted until 2025-05-13 23:58:34.837300888 +0000 UTC m=+11.953211015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d721a8f9-42d8-4e2d-8305-dc59868c686d-kube-proxy") pod "kube-proxy-49jhn" (UID: "d721a8f9-42d8-4e2d-8305-dc59868c686d") : failed to sync configmap cache: timed out waiting for the condition May 13 23:58:35.041469 kubelet[2658]: E0513 23:58:35.041406 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:35.041997 containerd[1491]: time="2025-05-13T23:58:35.041934114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49jhn,Uid:d721a8f9-42d8-4e2d-8305-dc59868c686d,Namespace:kube-system,Attempt:0,}" May 13 23:58:35.091760 containerd[1491]: time="2025-05-13T23:58:35.091690952Z" level=info msg="connecting to shim e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2" address="unix:///run/containerd/s/70d4a98d511151384552f80fae48fcc2b0b04fbc9597b5288bef17fc49777969" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:35.120860 systemd[1]: Started cri-containerd-e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2.scope - libcontainer container e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2. 
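The configmap failure above shows the kubelet's volume retry machinery at work: the operation is parked with "No retries permitted until ... (durationBeforeRetry 500ms)" and re-queued once the cache syncs. A minimal Go sketch of the doubling-delay pattern implied by that message, assuming a conventional exponential backoff with a cap (the 500ms start is from the log; the factor and the cap are assumptions):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // first durationBeforeRetry visible in the log
        delay := 500 * time.Millisecond
        // assumed cap; not taken from this log
        maxDelay := 2*time.Minute + 2*time.Second
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
            delay *= 2 // assumed doubling between consecutive failures
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

In this log the first retry lands roughly 500ms later and succeeds (the kube-proxy-49jhn sandbox is created just after 23:58:35), so the backoff never escalates.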
May 13 23:58:35.174907 containerd[1491]: time="2025-05-13T23:58:35.174862420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49jhn,Uid:d721a8f9-42d8-4e2d-8305-dc59868c686d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2\"" May 13 23:58:35.175696 kubelet[2658]: E0513 23:58:35.175637 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:35.177513 containerd[1491]: time="2025-05-13T23:58:35.177462666Z" level=info msg="CreateContainer within sandbox \"e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:58:35.209876 containerd[1491]: time="2025-05-13T23:58:35.209807193Z" level=info msg="Container 0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:35.914548 containerd[1491]: time="2025-05-13T23:58:35.914493318Z" level=info msg="CreateContainer within sandbox \"e68f75fe920b42d5f2d55629d10f2493bce6499bfc4b1a8d3fb5d187b9fb0dd2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8\"" May 13 23:58:35.915399 containerd[1491]: time="2025-05-13T23:58:35.915359071Z" level=info msg="StartContainer for \"0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8\"" May 13 23:58:35.917327 containerd[1491]: time="2025-05-13T23:58:35.917303456Z" level=info msg="connecting to shim 0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8" address="unix:///run/containerd/s/70d4a98d511151384552f80fae48fcc2b0b04fbc9597b5288bef17fc49777969" protocol=ttrpc version=3 May 13 23:58:35.941867 systemd[1]: Started cri-containerd-0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8.scope - libcontainer container 0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8. May 13 23:58:36.373304 containerd[1491]: time="2025-05-13T23:58:36.373249354Z" level=info msg="StartContainer for \"0b67216161a16a1d90bb5085167b58d5fcc70d684d942953bb290e8956afdcd8\" returns successfully" May 13 23:58:36.552139 sudo[1687]: pam_unix(sudo:session): session closed for user root May 13 23:58:36.553819 sshd[1686]: Connection closed by 10.0.0.1 port 46748 May 13 23:58:36.561703 sshd-session[1683]: pam_unix(sshd:session): session closed for user core May 13 23:58:36.573473 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:46748.service: Deactivated successfully. May 13 23:58:36.577465 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:58:36.578106 systemd[1]: session-7.scope: Consumed 5.185s CPU time, 218.5M memory peak. May 13 23:58:36.582782 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. May 13 23:58:36.584248 systemd-logind[1475]: Removed session 7. May 13 23:58:36.902043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476712833.mount: Deactivated successfully. 
May 13 23:58:37.051706 kubelet[2658]: E0513 23:58:37.049825 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:37.226249 containerd[1491]: time="2025-05-13T23:58:37.226127022Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:37.226962 containerd[1491]: time="2025-05-13T23:58:37.226908362Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 23:58:37.227893 containerd[1491]: time="2025-05-13T23:58:37.227869673Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:37.229782 containerd[1491]: time="2025-05-13T23:58:37.229750452Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:37.230325 containerd[1491]: time="2025-05-13T23:58:37.230298213Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.355066711s" May 13 23:58:37.230362 containerd[1491]: time="2025-05-13T23:58:37.230327902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 23:58:37.231958 containerd[1491]: time="2025-05-13T23:58:37.231932948Z" level=info msg="CreateContainer within sandbox \"013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:58:37.240981 containerd[1491]: time="2025-05-13T23:58:37.240927981Z" level=info msg="Container 425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:37.247801 containerd[1491]: time="2025-05-13T23:58:37.247755200Z" level=info msg="CreateContainer within sandbox \"013208a0739551dbd952998fa4e9a5fdc094c29a0fa14ede7a4efc498d724449\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92\"" May 13 23:58:37.248335 containerd[1491]: time="2025-05-13T23:58:37.248284423Z" level=info msg="StartContainer for \"425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92\"" May 13 23:58:37.249331 containerd[1491]: time="2025-05-13T23:58:37.249294172Z" level=info msg="connecting to shim 425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92" address="unix:///run/containerd/s/4b71ff263f9f0402cb6b743a1027bc6bdc6efe2d6836dc62bce5200d0da1fc83" protocol=ttrpc version=3 May 13 23:58:37.274816 systemd[1]: Started cri-containerd-425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92.scope - libcontainer container 425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92. 
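The "Nameserver limits exceeded" errors recurring throughout this boot are the kubelet checking the node's resolv.conf against the classic resolver limit of three nameservers: everything past the third entry is dropped, and the message shows the surviving line (1.1.1.1 1.0.0.1 8.8.8.8). A hedged reconstruction of a node file that would trigger it; only the first three entries appear in the log, the fourth is an assumption:

    # /etc/resolv.conf (reconstructed example)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.168.1.1   # hypothetical extra entry; anything past three is omitted

The error is cosmetic as long as the first three servers are the ones that matter, which is presumably why it repeats for the life of this log without consequence.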
May 13 23:58:37.365635 containerd[1491]: time="2025-05-13T23:58:37.365592111Z" level=info msg="StartContainer for \"425f1e2a5e3f6e1dfdba1831756609eabff6dbd072dcaaf7906d5fed00788a92\" returns successfully" May 13 23:58:38.053475 kubelet[2658]: E0513 23:58:38.053369 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:38.101360 kubelet[2658]: I0513 23:58:38.101216 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49jhn" podStartSLOduration=6.101189651 podStartE2EDuration="6.101189651s" podCreationTimestamp="2025-05-13 23:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:37.0626377 +0000 UTC m=+14.178547847" watchObservedRunningTime="2025-05-13 23:58:38.101189651 +0000 UTC m=+15.217099778" May 13 23:58:40.901944 kubelet[2658]: I0513 23:58:40.901863 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-nzvxq" podStartSLOduration=4.54560321 podStartE2EDuration="7.901842933s" podCreationTimestamp="2025-05-13 23:58:33 +0000 UTC" firstStartedPulling="2025-05-13 23:58:33.874740571 +0000 UTC m=+10.990650698" lastFinishedPulling="2025-05-13 23:58:37.230980304 +0000 UTC m=+14.346890421" observedRunningTime="2025-05-13 23:58:38.101437939 +0000 UTC m=+15.217348066" watchObservedRunningTime="2025-05-13 23:58:40.901842933 +0000 UTC m=+18.017753060" May 13 23:58:40.922344 systemd[1]: Created slice kubepods-besteffort-pod40cc0397_0464_4c2d_9fc9_953b5634329f.slice - libcontainer container kubepods-besteffort-pod40cc0397_0464_4c2d_9fc9_953b5634329f.slice. May 13 23:58:40.934713 systemd[1]: Created slice kubepods-besteffort-podb6b75de9_b29e_4ecb_9883_253cdb37c993.slice - libcontainer container kubepods-besteffort-podb6b75de9_b29e_4ecb_9883_253cdb37c993.slice. 
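The two pod_startup_latency_tracker records above fit together arithmetically: podStartE2EDuration runs from pod creation to observed-running, and podStartSLOduration appears to subtract the image-pull window (firstStartedPulling to lastFinishedPulling), which is why the two values are identical for kube-proxy (zero-value pull timestamps) but differ for tigera-operator. Checking against the logged timestamps:

    podStartE2EDuration = 23:58:40.901842933 - 23:58:33           = 7.901842933 s
    pull window         = 23:58:37.230980304 - 23:58:33.874740571 = 3.356239733 s
    podStartSLOduration = 7.901842933 s - 3.356239733 s           = 4.545603200 s

which agrees with the logged 4.54560321 up to rounding of the printed value.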
May 13 23:58:40.983537 kubelet[2658]: I0513 23:58:40.983453 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40cc0397-0464-4c2d-9fc9-953b5634329f-tigera-ca-bundle\") pod \"calico-typha-875fd4bcd-8gwgw\" (UID: \"40cc0397-0464-4c2d-9fc9-953b5634329f\") " pod="calico-system/calico-typha-875fd4bcd-8gwgw" May 13 23:58:40.983537 kubelet[2658]: I0513 23:58:40.983534 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-xtables-lock\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983796 kubelet[2658]: I0513 23:58:40.983571 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6b75de9-b29e-4ecb-9883-253cdb37c993-tigera-ca-bundle\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983796 kubelet[2658]: I0513 23:58:40.983599 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-log-dir\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983796 kubelet[2658]: I0513 23:58:40.983624 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nx54\" (UniqueName: \"kubernetes.io/projected/b6b75de9-b29e-4ecb-9883-253cdb37c993-kube-api-access-2nx54\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983796 kubelet[2658]: I0513 23:58:40.983710 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-policysync\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983796 kubelet[2658]: I0513 23:58:40.983735 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-bin-dir\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983997 kubelet[2658]: I0513 23:58:40.983759 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls4nz\" (UniqueName: \"kubernetes.io/projected/40cc0397-0464-4c2d-9fc9-953b5634329f-kube-api-access-ls4nz\") pod \"calico-typha-875fd4bcd-8gwgw\" (UID: \"40cc0397-0464-4c2d-9fc9-953b5634329f\") " pod="calico-system/calico-typha-875fd4bcd-8gwgw" May 13 23:58:40.983997 kubelet[2658]: I0513 23:58:40.983778 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-net-dir\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 
23:58:40.983997 kubelet[2658]: I0513 23:58:40.983796 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-flexvol-driver-host\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983997 kubelet[2658]: I0513 23:58:40.983817 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-lib-modules\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.983997 kubelet[2658]: I0513 23:58:40.983836 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-lib-calico\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.984183 kubelet[2658]: I0513 23:58:40.983889 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/40cc0397-0464-4c2d-9fc9-953b5634329f-typha-certs\") pod \"calico-typha-875fd4bcd-8gwgw\" (UID: \"40cc0397-0464-4c2d-9fc9-953b5634329f\") " pod="calico-system/calico-typha-875fd4bcd-8gwgw" May 13 23:58:40.984183 kubelet[2658]: I0513 23:58:40.983907 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6b75de9-b29e-4ecb-9883-253cdb37c993-node-certs\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:40.984183 kubelet[2658]: I0513 23:58:40.983924 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-run-calico\") pod \"calico-node-g2hgr\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " pod="calico-system/calico-node-g2hgr" May 13 23:58:41.089407 kubelet[2658]: E0513 23:58:41.089373 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.089407 kubelet[2658]: W0513 23:58:41.089398 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.089782 kubelet[2658]: E0513 23:58:41.089424 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.089782 kubelet[2658]: E0513 23:58:41.089711 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.089782 kubelet[2658]: W0513 23:58:41.089721 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.089782 kubelet[2658]: E0513 23:58:41.089731 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.186103 kubelet[2658]: E0513 23:58:41.185952 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.186103 kubelet[2658]: W0513 23:58:41.185992 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.186103 kubelet[2658]: E0513 23:58:41.186037 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.186399 kubelet[2658]: E0513 23:58:41.186375 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.186399 kubelet[2658]: W0513 23:58:41.186394 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.186506 kubelet[2658]: E0513 23:58:41.186408 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.191013 kubelet[2658]: E0513 23:58:41.190935 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:41.198967 kubelet[2658]: E0513 23:58:41.198931 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.198967 kubelet[2658]: W0513 23:58:41.198963 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.199159 kubelet[2658]: E0513 23:58:41.198993 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.203093 kubelet[2658]: E0513 23:58:41.203060 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.203093 kubelet[2658]: W0513 23:58:41.203081 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.203222 kubelet[2658]: E0513 23:58:41.203100 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.225401 kubelet[2658]: E0513 23:58:41.225341 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:41.226054 containerd[1491]: time="2025-05-13T23:58:41.226009894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-875fd4bcd-8gwgw,Uid:40cc0397-0464-4c2d-9fc9-953b5634329f,Namespace:calico-system,Attempt:0,}" May 13 23:58:41.239527 kubelet[2658]: E0513 23:58:41.239489 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:41.239903 containerd[1491]: time="2025-05-13T23:58:41.239867011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2hgr,Uid:b6b75de9-b29e-4ecb-9883-253cdb37c993,Namespace:calico-system,Attempt:0,}" May 13 23:58:41.279734 kubelet[2658]: E0513 23:58:41.279693 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.279734 kubelet[2658]: W0513 23:58:41.279721 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.279734 kubelet[2658]: E0513 23:58:41.279747 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.280039 kubelet[2658]: E0513 23:58:41.280020 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.280039 kubelet[2658]: W0513 23:58:41.280038 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.280137 kubelet[2658]: E0513 23:58:41.280053 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.280325 kubelet[2658]: E0513 23:58:41.280309 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.280325 kubelet[2658]: W0513 23:58:41.280322 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.280438 kubelet[2658]: E0513 23:58:41.280336 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.280608 kubelet[2658]: E0513 23:58:41.280588 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.280608 kubelet[2658]: W0513 23:58:41.280605 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.280725 kubelet[2658]: E0513 23:58:41.280619 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.280896 kubelet[2658]: E0513 23:58:41.280877 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.280896 kubelet[2658]: W0513 23:58:41.280889 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.280982 kubelet[2658]: E0513 23:58:41.280900 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.281282 kubelet[2658]: E0513 23:58:41.281231 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.281282 kubelet[2658]: W0513 23:58:41.281273 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.281347 kubelet[2658]: E0513 23:58:41.281306 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.281637 kubelet[2658]: E0513 23:58:41.281614 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.281637 kubelet[2658]: W0513 23:58:41.281626 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.281637 kubelet[2658]: E0513 23:58:41.281636 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.281915 kubelet[2658]: E0513 23:58:41.281891 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.281915 kubelet[2658]: W0513 23:58:41.281903 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.281915 kubelet[2658]: E0513 23:58:41.281911 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.282194 kubelet[2658]: E0513 23:58:41.282174 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.282194 kubelet[2658]: W0513 23:58:41.282187 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.282267 kubelet[2658]: E0513 23:58:41.282198 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.282517 kubelet[2658]: E0513 23:58:41.282486 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.282517 kubelet[2658]: W0513 23:58:41.282503 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.282616 kubelet[2658]: E0513 23:58:41.282518 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.282792 kubelet[2658]: E0513 23:58:41.282765 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.282792 kubelet[2658]: W0513 23:58:41.282778 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.282877 kubelet[2658]: E0513 23:58:41.282793 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.283039 kubelet[2658]: E0513 23:58:41.283015 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.283039 kubelet[2658]: W0513 23:58:41.283027 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.283039 kubelet[2658]: E0513 23:58:41.283038 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.283314 kubelet[2658]: E0513 23:58:41.283294 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.283314 kubelet[2658]: W0513 23:58:41.283309 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.283389 kubelet[2658]: E0513 23:58:41.283320 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.283562 kubelet[2658]: E0513 23:58:41.283547 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.283562 kubelet[2658]: W0513 23:58:41.283558 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.283627 kubelet[2658]: E0513 23:58:41.283567 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.283834 kubelet[2658]: E0513 23:58:41.283817 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.283834 kubelet[2658]: W0513 23:58:41.283833 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.283970 kubelet[2658]: E0513 23:58:41.283842 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.284083 kubelet[2658]: E0513 23:58:41.284068 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.284083 kubelet[2658]: W0513 23:58:41.284080 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.284155 kubelet[2658]: E0513 23:58:41.284089 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.284333 kubelet[2658]: E0513 23:58:41.284319 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.284333 kubelet[2658]: W0513 23:58:41.284330 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.284389 kubelet[2658]: E0513 23:58:41.284339 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.284635 kubelet[2658]: E0513 23:58:41.284610 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.284635 kubelet[2658]: W0513 23:58:41.284625 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.284746 kubelet[2658]: E0513 23:58:41.284636 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.284955 kubelet[2658]: E0513 23:58:41.284927 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.284955 kubelet[2658]: W0513 23:58:41.284942 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.284955 kubelet[2658]: E0513 23:58:41.284953 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.285209 kubelet[2658]: E0513 23:58:41.285182 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.285209 kubelet[2658]: W0513 23:58:41.285194 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.285209 kubelet[2658]: E0513 23:58:41.285202 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.287538 kubelet[2658]: E0513 23:58:41.287514 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.287538 kubelet[2658]: W0513 23:58:41.287528 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.287639 kubelet[2658]: E0513 23:58:41.287541 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.287639 kubelet[2658]: I0513 23:58:41.287577 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w297d\" (UniqueName: \"kubernetes.io/projected/6c89c23a-8ac4-492c-ae00-402f1ec38ec8-kube-api-access-w297d\") pod \"csi-node-driver-ppkvw\" (UID: \"6c89c23a-8ac4-492c-ae00-402f1ec38ec8\") " pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:41.287887 kubelet[2658]: E0513 23:58:41.287859 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.287887 kubelet[2658]: W0513 23:58:41.287873 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.287984 kubelet[2658]: E0513 23:58:41.287891 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.287984 kubelet[2658]: I0513 23:58:41.287914 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c89c23a-8ac4-492c-ae00-402f1ec38ec8-kubelet-dir\") pod \"csi-node-driver-ppkvw\" (UID: \"6c89c23a-8ac4-492c-ae00-402f1ec38ec8\") " pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:41.288199 kubelet[2658]: E0513 23:58:41.288169 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.288199 kubelet[2658]: W0513 23:58:41.288183 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.288289 kubelet[2658]: E0513 23:58:41.288200 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.288289 kubelet[2658]: I0513 23:58:41.288220 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6c89c23a-8ac4-492c-ae00-402f1ec38ec8-varrun\") pod \"csi-node-driver-ppkvw\" (UID: \"6c89c23a-8ac4-492c-ae00-402f1ec38ec8\") " pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:41.288472 kubelet[2658]: E0513 23:58:41.288450 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.288472 kubelet[2658]: W0513 23:58:41.288466 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.288562 kubelet[2658]: E0513 23:58:41.288486 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.288562 kubelet[2658]: I0513 23:58:41.288518 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c89c23a-8ac4-492c-ae00-402f1ec38ec8-registration-dir\") pod \"csi-node-driver-ppkvw\" (UID: \"6c89c23a-8ac4-492c-ae00-402f1ec38ec8\") " pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:41.288776 kubelet[2658]: E0513 23:58:41.288745 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.288776 kubelet[2658]: W0513 23:58:41.288764 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.288868 kubelet[2658]: E0513 23:58:41.288783 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.289007 kubelet[2658]: E0513 23:58:41.288989 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.289007 kubelet[2658]: W0513 23:58:41.289001 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.289084 kubelet[2658]: E0513 23:58:41.289018 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.289300 kubelet[2658]: E0513 23:58:41.289281 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.289300 kubelet[2658]: W0513 23:58:41.289293 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.289388 kubelet[2658]: E0513 23:58:41.289310 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.289532 kubelet[2658]: E0513 23:58:41.289514 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.289532 kubelet[2658]: W0513 23:58:41.289526 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.289608 kubelet[2658]: E0513 23:58:41.289563 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.289796 kubelet[2658]: E0513 23:58:41.289778 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.289796 kubelet[2658]: W0513 23:58:41.289791 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.289888 kubelet[2658]: E0513 23:58:41.289819 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.290028 kubelet[2658]: E0513 23:58:41.290010 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.290028 kubelet[2658]: W0513 23:58:41.290022 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.290095 kubelet[2658]: E0513 23:58:41.290060 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.290095 kubelet[2658]: I0513 23:58:41.290084 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c89c23a-8ac4-492c-ae00-402f1ec38ec8-socket-dir\") pod \"csi-node-driver-ppkvw\" (UID: \"6c89c23a-8ac4-492c-ae00-402f1ec38ec8\") " pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:41.290392 kubelet[2658]: E0513 23:58:41.290365 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.290392 kubelet[2658]: W0513 23:58:41.290378 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.290477 kubelet[2658]: E0513 23:58:41.290414 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.290611 kubelet[2658]: E0513 23:58:41.290593 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.290611 kubelet[2658]: W0513 23:58:41.290605 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.290719 kubelet[2658]: E0513 23:58:41.290617 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.290895 kubelet[2658]: E0513 23:58:41.290876 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.290895 kubelet[2658]: W0513 23:58:41.290889 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.290982 kubelet[2658]: E0513 23:58:41.290905 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.291138 kubelet[2658]: E0513 23:58:41.291112 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.291138 kubelet[2658]: W0513 23:58:41.291132 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.291222 kubelet[2658]: E0513 23:58:41.291143 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.291399 kubelet[2658]: E0513 23:58:41.291381 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.291399 kubelet[2658]: W0513 23:58:41.291394 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.291473 kubelet[2658]: E0513 23:58:41.291406 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.392383 kubelet[2658]: E0513 23:58:41.392323 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.392383 kubelet[2658]: W0513 23:58:41.392364 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.392383 kubelet[2658]: E0513 23:58:41.392391 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.392754 kubelet[2658]: E0513 23:58:41.392727 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.392754 kubelet[2658]: W0513 23:58:41.392740 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.392844 kubelet[2658]: E0513 23:58:41.392757 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.393032 kubelet[2658]: E0513 23:58:41.393011 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.393032 kubelet[2658]: W0513 23:58:41.393030 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.393091 kubelet[2658]: E0513 23:58:41.393049 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.393313 kubelet[2658]: E0513 23:58:41.393295 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.393313 kubelet[2658]: W0513 23:58:41.393309 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.393369 kubelet[2658]: E0513 23:58:41.393328 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.393570 kubelet[2658]: E0513 23:58:41.393548 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.393570 kubelet[2658]: W0513 23:58:41.393563 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.393638 kubelet[2658]: E0513 23:58:41.393580 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.393930 kubelet[2658]: E0513 23:58:41.393913 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.393930 kubelet[2658]: W0513 23:58:41.393928 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.394018 kubelet[2658]: E0513 23:58:41.393944 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.394220 kubelet[2658]: E0513 23:58:41.394202 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.394220 kubelet[2658]: W0513 23:58:41.394216 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.394309 kubelet[2658]: E0513 23:58:41.394233 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.394496 kubelet[2658]: E0513 23:58:41.394476 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.394496 kubelet[2658]: W0513 23:58:41.394491 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.394558 kubelet[2658]: E0513 23:58:41.394507 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.394785 kubelet[2658]: E0513 23:58:41.394763 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.394785 kubelet[2658]: W0513 23:58:41.394779 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.394885 kubelet[2658]: E0513 23:58:41.394797 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.395034 kubelet[2658]: E0513 23:58:41.395016 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.395034 kubelet[2658]: W0513 23:58:41.395030 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.395091 kubelet[2658]: E0513 23:58:41.395048 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.395305 kubelet[2658]: E0513 23:58:41.395287 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.395305 kubelet[2658]: W0513 23:58:41.395302 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.395383 kubelet[2658]: E0513 23:58:41.395319 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.395597 kubelet[2658]: E0513 23:58:41.395580 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.395640 kubelet[2658]: W0513 23:58:41.395595 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.395640 kubelet[2658]: E0513 23:58:41.395614 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.395891 kubelet[2658]: E0513 23:58:41.395874 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.395891 kubelet[2658]: W0513 23:58:41.395889 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.395963 kubelet[2658]: E0513 23:58:41.395906 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.396166 kubelet[2658]: E0513 23:58:41.396149 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.396166 kubelet[2658]: W0513 23:58:41.396163 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.396239 kubelet[2658]: E0513 23:58:41.396180 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.396464 kubelet[2658]: E0513 23:58:41.396439 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.396464 kubelet[2658]: W0513 23:58:41.396452 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.396528 kubelet[2658]: E0513 23:58:41.396466 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.396677 kubelet[2658]: E0513 23:58:41.396648 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.396677 kubelet[2658]: W0513 23:58:41.396658 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.396754 kubelet[2658]: E0513 23:58:41.396681 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.396932 kubelet[2658]: E0513 23:58:41.396916 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.396969 kubelet[2658]: W0513 23:58:41.396931 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.396969 kubelet[2658]: E0513 23:58:41.396949 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.397198 kubelet[2658]: E0513 23:58:41.397182 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.397198 kubelet[2658]: W0513 23:58:41.397198 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.397266 kubelet[2658]: E0513 23:58:41.397214 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.397466 kubelet[2658]: E0513 23:58:41.397450 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.397466 kubelet[2658]: W0513 23:58:41.397464 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.397538 kubelet[2658]: E0513 23:58:41.397482 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.397772 kubelet[2658]: E0513 23:58:41.397754 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.397772 kubelet[2658]: W0513 23:58:41.397770 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.397853 kubelet[2658]: E0513 23:58:41.397803 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.398027 kubelet[2658]: E0513 23:58:41.398009 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.398027 kubelet[2658]: W0513 23:58:41.398025 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.398094 kubelet[2658]: E0513 23:58:41.398058 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.398329 kubelet[2658]: E0513 23:58:41.398305 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.398329 kubelet[2658]: W0513 23:58:41.398321 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.398416 kubelet[2658]: E0513 23:58:41.398334 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.398591 kubelet[2658]: E0513 23:58:41.398562 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.398591 kubelet[2658]: W0513 23:58:41.398578 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.398591 kubelet[2658]: E0513 23:58:41.398590 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.398860 kubelet[2658]: E0513 23:58:41.398831 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.398860 kubelet[2658]: W0513 23:58:41.398844 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.398860 kubelet[2658]: E0513 23:58:41.398857 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.399142 kubelet[2658]: E0513 23:58:41.399114 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.399142 kubelet[2658]: W0513 23:58:41.399135 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.399227 kubelet[2658]: E0513 23:58:41.399147 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.498236 kubelet[2658]: E0513 23:58:41.498083 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.498236 kubelet[2658]: W0513 23:58:41.498109 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.498236 kubelet[2658]: E0513 23:58:41.498133 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:58:41.524659 kubelet[2658]: E0513 23:58:41.524583 2658 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:58:41.524659 kubelet[2658]: W0513 23:58:41.524607 2658 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:58:41.524659 kubelet[2658]: E0513 23:58:41.524631 2658 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:58:41.988597 containerd[1491]: time="2025-05-13T23:58:41.988505166Z" level=info msg="connecting to shim 670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:41.989019 containerd[1491]: time="2025-05-13T23:58:41.988505096Z" level=info msg="connecting to shim 1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad" address="unix:///run/containerd/s/6711b1c86f0e751b96663cfdde20e42a92ea0a40cdcde2fce49f70144363b82e" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:42.027185 systemd[1]: Started cri-containerd-670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed.scope - libcontainer container 670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed. May 13 23:58:42.034797 systemd[1]: Started cri-containerd-1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad.scope - libcontainer container 1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad. May 13 23:58:42.093661 containerd[1491]: time="2025-05-13T23:58:42.093613392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2hgr,Uid:b6b75de9-b29e-4ecb-9883-253cdb37c993,Namespace:calico-system,Attempt:0,} returns sandbox id \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\"" May 13 23:58:42.094585 kubelet[2658]: E0513 23:58:42.094548 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:42.096138 containerd[1491]: time="2025-05-13T23:58:42.096098515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:58:42.141906 containerd[1491]: time="2025-05-13T23:58:42.141769607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-875fd4bcd-8gwgw,Uid:40cc0397-0464-4c2d-9fc9-953b5634329f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad\"" May 13 23:58:42.142854 kubelet[2658]: E0513 23:58:42.142656 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:43.004870 kubelet[2658]: E0513 23:58:43.004790 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:43.994735 containerd[1491]: time="2025-05-13T23:58:43.994645593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:43.995939 containerd[1491]: time="2025-05-13T23:58:43.995812205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 23:58:43.997511 containerd[1491]: time="2025-05-13T23:58:43.997455151Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:44.003909 containerd[1491]: 
time="2025-05-13T23:58:44.003844222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:44.004627 containerd[1491]: time="2025-05-13T23:58:44.004587659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.908439434s" May 13 23:58:44.004742 containerd[1491]: time="2025-05-13T23:58:44.004632428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 23:58:44.005845 containerd[1491]: time="2025-05-13T23:58:44.005797065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:58:44.007552 containerd[1491]: time="2025-05-13T23:58:44.007070269Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:58:44.091467 containerd[1491]: time="2025-05-13T23:58:44.091366909Z" level=info msg="Container 9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:44.109999 containerd[1491]: time="2025-05-13T23:58:44.109923148Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\"" May 13 23:58:44.110713 containerd[1491]: time="2025-05-13T23:58:44.110654429Z" level=info msg="StartContainer for \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\"" May 13 23:58:44.112771 containerd[1491]: time="2025-05-13T23:58:44.112724476Z" level=info msg="connecting to shim 9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" protocol=ttrpc version=3 May 13 23:58:44.139917 systemd[1]: Started cri-containerd-9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161.scope - libcontainer container 9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161. May 13 23:58:44.207020 systemd[1]: cri-containerd-9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161.scope: Deactivated successfully. 
May 13 23:58:44.208947 containerd[1491]: time="2025-05-13T23:58:44.208900402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" id:\"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" pid:3251 exited_at:{seconds:1747180724 nanos:208349482}" May 13 23:58:44.224549 containerd[1491]: time="2025-05-13T23:58:44.224477248Z" level=info msg="received exit event container_id:\"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" id:\"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" pid:3251 exited_at:{seconds:1747180724 nanos:208349482}" May 13 23:58:44.226807 containerd[1491]: time="2025-05-13T23:58:44.226767705Z" level=info msg="StartContainer for \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" returns successfully" May 13 23:58:44.255100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161-rootfs.mount: Deactivated successfully. May 13 23:58:45.005333 kubelet[2658]: E0513 23:58:45.004801 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:45.069254 kubelet[2658]: E0513 23:58:45.069211 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:47.004470 kubelet[2658]: E0513 23:58:47.004404 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:47.457111 containerd[1491]: time="2025-05-13T23:58:47.457046563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:47.457910 containerd[1491]: time="2025-05-13T23:58:47.457839886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 23:58:47.463827 containerd[1491]: time="2025-05-13T23:58:47.463771848Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:47.466395 containerd[1491]: time="2025-05-13T23:58:47.466360833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:47.467103 containerd[1491]: time="2025-05-13T23:58:47.467069005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.461222211s" May 13 23:58:47.467172 containerd[1491]: time="2025-05-13T23:58:47.467107973Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 23:58:47.468216 containerd[1491]: time="2025-05-13T23:58:47.467963449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:58:47.477547 containerd[1491]: time="2025-05-13T23:58:47.477492136Z" level=info msg="CreateContainer within sandbox \"1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:58:47.562559 containerd[1491]: time="2025-05-13T23:58:47.562476740Z" level=info msg="Container c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:47.582627 containerd[1491]: time="2025-05-13T23:58:47.582559881Z" level=info msg="CreateContainer within sandbox \"1b7ebd626341df608eb26d72ab5b405ae900684b9fb4024c46079546a39cfcad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571\"" May 13 23:58:47.583363 containerd[1491]: time="2025-05-13T23:58:47.583120880Z" level=info msg="StartContainer for \"c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571\"" May 13 23:58:47.587618 containerd[1491]: time="2025-05-13T23:58:47.587550325Z" level=info msg="connecting to shim c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571" address="unix:///run/containerd/s/6711b1c86f0e751b96663cfdde20e42a92ea0a40cdcde2fce49f70144363b82e" protocol=ttrpc version=3 May 13 23:58:47.612916 systemd[1]: Started cri-containerd-c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571.scope - libcontainer container c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571. 
May 13 23:58:47.674045 containerd[1491]: time="2025-05-13T23:58:47.673985994Z" level=info msg="StartContainer for \"c0ed72fb5561b6271fffa3862a2b8cd6c5e41fe2be4f9dbdbe06d4e903437571\" returns successfully" May 13 23:58:48.080827 kubelet[2658]: E0513 23:58:48.080789 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:48.176103 kubelet[2658]: I0513 23:58:48.176037 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-875fd4bcd-8gwgw" podStartSLOduration=2.8516160680000002 podStartE2EDuration="8.176017435s" podCreationTimestamp="2025-05-13 23:58:40 +0000 UTC" firstStartedPulling="2025-05-13 23:58:42.143448526 +0000 UTC m=+19.259358653" lastFinishedPulling="2025-05-13 23:58:47.467849893 +0000 UTC m=+24.583760020" observedRunningTime="2025-05-13 23:58:48.175578239 +0000 UTC m=+25.291488387" watchObservedRunningTime="2025-05-13 23:58:48.176017435 +0000 UTC m=+25.291927562" May 13 23:58:49.004396 kubelet[2658]: E0513 23:58:49.004328 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:49.078214 kubelet[2658]: I0513 23:58:49.078152 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:58:49.078585 kubelet[2658]: E0513 23:58:49.078540 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:51.004595 kubelet[2658]: E0513 23:58:51.004512 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:52.642873 containerd[1491]: time="2025-05-13T23:58:52.642797127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:52.645858 containerd[1491]: time="2025-05-13T23:58:52.645657952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 23:58:52.647723 containerd[1491]: time="2025-05-13T23:58:52.647657722Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:52.649921 containerd[1491]: time="2025-05-13T23:58:52.649876539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:52.650456 containerd[1491]: time="2025-05-13T23:58:52.650384594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", 
size \"99286305\" in 5.182394462s" May 13 23:58:52.650456 containerd[1491]: time="2025-05-13T23:58:52.650437610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 23:58:52.668276 containerd[1491]: time="2025-05-13T23:58:52.668206313Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:58:52.706486 containerd[1491]: time="2025-05-13T23:58:52.706378462Z" level=info msg="Container 0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:52.740356 containerd[1491]: time="2025-05-13T23:58:52.740285902Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\"" May 13 23:58:52.743618 containerd[1491]: time="2025-05-13T23:58:52.741048279Z" level=info msg="StartContainer for \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\"" May 13 23:58:52.743618 containerd[1491]: time="2025-05-13T23:58:52.742912662Z" level=info msg="connecting to shim 0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" protocol=ttrpc version=3 May 13 23:58:52.769026 systemd[1]: Started cri-containerd-0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087.scope - libcontainer container 0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087. May 13 23:58:52.823289 containerd[1491]: time="2025-05-13T23:58:52.823154263Z" level=info msg="StartContainer for \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" returns successfully" May 13 23:58:53.454047 kubelet[2658]: E0513 23:58:53.454007 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:53.456989 kubelet[2658]: E0513 23:58:53.456856 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:54.457739 kubelet[2658]: E0513 23:58:54.456348 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:54.596973 systemd[1]: cri-containerd-0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087.scope: Deactivated successfully. May 13 23:58:54.597653 systemd[1]: cri-containerd-0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087.scope: Consumed 632ms CPU time, 162.5M memory peak, 4K read from disk, 154M written to disk. 
May 13 23:58:54.598380 containerd[1491]: time="2025-05-13T23:58:54.598012202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" id:\"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" pid:3352 exited_at:{seconds:1747180734 nanos:597608513}" May 13 23:58:54.598380 containerd[1491]: time="2025-05-13T23:58:54.598108708Z" level=info msg="received exit event container_id:\"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" id:\"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" pid:3352 exited_at:{seconds:1747180734 nanos:597608513}" May 13 23:58:54.628251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087-rootfs.mount: Deactivated successfully. May 13 23:58:54.679859 kubelet[2658]: I0513 23:58:54.679816 2658 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:58:54.794264 systemd[1]: Created slice kubepods-besteffort-podb4d74f95_719e_4dc2_b743_1167771220e5.slice - libcontainer container kubepods-besteffort-podb4d74f95_719e_4dc2_b743_1167771220e5.slice. May 13 23:58:54.801443 systemd[1]: Created slice kubepods-burstable-pod4df2c9e3_a73b_411b_a21e_2c619d05304c.slice - libcontainer container kubepods-burstable-pod4df2c9e3_a73b_411b_a21e_2c619d05304c.slice. May 13 23:58:54.809994 systemd[1]: Created slice kubepods-besteffort-pod6687c9e7_fce4_4cea_b426_8f1da2fef6f3.slice - libcontainer container kubepods-besteffort-pod6687c9e7_fce4_4cea_b426_8f1da2fef6f3.slice. May 13 23:58:54.818512 systemd[1]: Created slice kubepods-besteffort-pod49054da8_6d54_4b6e_8457_befd52fd3a07.slice - libcontainer container kubepods-besteffort-pod49054da8_6d54_4b6e_8457_befd52fd3a07.slice. May 13 23:58:54.825149 systemd[1]: Created slice kubepods-burstable-podf40d0199_33e4_4e2f_9993_c63871326054.slice - libcontainer container kubepods-burstable-podf40d0199_33e4_4e2f_9993_c63871326054.slice. 
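Note that the TaskExit and received-exit events above carry exited_at as a protobuf-style {seconds, nanos} pair rather than a formatted time. Decoding the pair places the exit at 2025-05-13T23:58:54.597608513Z, a few hundred microseconds before containerd logged the received exit event at 23:58:54.598108708Z, which is the ordering one would expect. A one-liner in Go:

```go
// Convert the exited_at:{seconds:1747180734 nanos:597608513} pair from the
// TaskExit event above back into a wall-clock instant.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1747180734, 597608513).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-05-13T23:58:54.597608513Z
}
```

The same convention holds for the earlier flexvol-driver exit: seconds:1747180724 nanos:208349482 decodes to 23:58:44.208349482Z, just ahead of its TaskExit record at 23:58:44.208900402Z.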
May 13 23:58:54.939520 kubelet[2658]: I0513 23:58:54.939426 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6687c9e7-fce4-4cea-b426-8f1da2fef6f3-tigera-ca-bundle\") pod \"calico-kube-controllers-688d9b6545-z68xp\" (UID: \"6687c9e7-fce4-4cea-b426-8f1da2fef6f3\") " pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:58:54.939520 kubelet[2658]: I0513 23:58:54.939495 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40d0199-33e4-4e2f-9993-c63871326054-config-volume\") pod \"coredns-6f6b679f8f-rnl27\" (UID: \"f40d0199-33e4-4e2f-9993-c63871326054\") " pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:58:54.939520 kubelet[2658]: I0513 23:58:54.939518 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqvz\" (UniqueName: \"kubernetes.io/projected/49054da8-6d54-4b6e-8457-befd52fd3a07-kube-api-access-jmqvz\") pod \"calico-apiserver-5c6bb84fcc-ptzvd\" (UID: \"49054da8-6d54-4b6e-8457-befd52fd3a07\") " pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:58:54.939847 kubelet[2658]: I0513 23:58:54.939576 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrvb\" (UniqueName: \"kubernetes.io/projected/6687c9e7-fce4-4cea-b426-8f1da2fef6f3-kube-api-access-mdrvb\") pod \"calico-kube-controllers-688d9b6545-z68xp\" (UID: \"6687c9e7-fce4-4cea-b426-8f1da2fef6f3\") " pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:58:54.939847 kubelet[2658]: I0513 23:58:54.939619 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6llvh\" (UniqueName: \"kubernetes.io/projected/b4d74f95-719e-4dc2-b743-1167771220e5-kube-api-access-6llvh\") pod \"calico-apiserver-5c6bb84fcc-8lbpv\" (UID: \"b4d74f95-719e-4dc2-b743-1167771220e5\") " pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:58:54.939923 kubelet[2658]: I0513 23:58:54.939880 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wt2\" (UniqueName: \"kubernetes.io/projected/4df2c9e3-a73b-411b-a21e-2c619d05304c-kube-api-access-87wt2\") pod \"coredns-6f6b679f8f-l9mww\" (UID: \"4df2c9e3-a73b-411b-a21e-2c619d05304c\") " pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:58:54.940010 kubelet[2658]: I0513 23:58:54.939975 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49054da8-6d54-4b6e-8457-befd52fd3a07-calico-apiserver-certs\") pod \"calico-apiserver-5c6bb84fcc-ptzvd\" (UID: \"49054da8-6d54-4b6e-8457-befd52fd3a07\") " pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:58:54.940055 kubelet[2658]: I0513 23:58:54.940020 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrf6\" (UniqueName: \"kubernetes.io/projected/f40d0199-33e4-4e2f-9993-c63871326054-kube-api-access-2lrf6\") pod \"coredns-6f6b679f8f-rnl27\" (UID: \"f40d0199-33e4-4e2f-9993-c63871326054\") " pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:58:54.940097 kubelet[2658]: I0513 23:58:54.940051 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4d74f95-719e-4dc2-b743-1167771220e5-calico-apiserver-certs\") pod \"calico-apiserver-5c6bb84fcc-8lbpv\" (UID: \"b4d74f95-719e-4dc2-b743-1167771220e5\") " pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:58:54.940097 kubelet[2658]: I0513 23:58:54.940085 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4df2c9e3-a73b-411b-a21e-2c619d05304c-config-volume\") pod \"coredns-6f6b679f8f-l9mww\" (UID: \"4df2c9e3-a73b-411b-a21e-2c619d05304c\") " pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:58:55.012156 systemd[1]: Created slice kubepods-besteffort-pod6c89c23a_8ac4_492c_ae00_402f1ec38ec8.slice - libcontainer container kubepods-besteffort-pod6c89c23a_8ac4_492c_ae00_402f1ec38ec8.slice. May 13 23:58:55.106060 kubelet[2658]: E0513 23:58:55.106018 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:55.128997 kubelet[2658]: E0513 23:58:55.128955 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:55.239717 containerd[1491]: time="2025-05-13T23:58:55.239285452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 13 23:58:55.239717 containerd[1491]: time="2025-05-13T23:58:55.239325226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:58:55.239717 containerd[1491]: time="2025-05-13T23:58:55.239338540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 13 23:58:55.240003 containerd[1491]: time="2025-05-13T23:58:55.239909616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:58:55.240080 containerd[1491]: time="2025-05-13T23:58:55.240055965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,}" May 13 23:58:55.240179 containerd[1491]: time="2025-05-13T23:58:55.240150628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 13 23:58:55.362066 containerd[1491]: time="2025-05-13T23:58:55.361920149Z" level=error msg="Failed to destroy network for sandbox \"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.365482 containerd[1491]: time="2025-05-13T23:58:55.364291835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.365945 kubelet[2658]: E0513 23:58:55.365882 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.366007 kubelet[2658]: E0513 23:58:55.365982 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:58:55.366031 kubelet[2658]: E0513 23:58:55.366004 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:58:55.366751 kubelet[2658]: E0513 23:58:55.366067 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f3cdb9ddafd72b0811244cf42abe75924fe74659dea4a06174fcf6cd16e00d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnl27" podUID="f40d0199-33e4-4e2f-9993-c63871326054" May 13 23:58:55.378481 containerd[1491]: time="2025-05-13T23:58:55.378417068Z" level=error msg="Failed to destroy network for sandbox \"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.380911 containerd[1491]: time="2025-05-13T23:58:55.380844425Z" level=error msg="Failed to destroy network for sandbox \"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.382247 containerd[1491]: time="2025-05-13T23:58:55.382017955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.382598 containerd[1491]: time="2025-05-13T23:58:55.382562884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.382749 kubelet[2658]: E0513 23:58:55.382714 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.382805 kubelet[2658]: E0513 23:58:55.382783 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:55.382834 kubelet[2658]: E0513 23:58:55.382806 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:58:55.383048 kubelet[2658]: E0513 23:58:55.382846 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7490e325bbe6d15abe6d2ca05bec3388d10f704638c61c1a3b3cfab82754802\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:58:55.383599 kubelet[2658]: E0513 23:58:55.383495 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.383599 
kubelet[2658]: E0513 23:58:55.383526 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:58:55.383599 kubelet[2658]: E0513 23:58:55.383544 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:58:55.383721 kubelet[2658]: E0513 23:58:55.383572 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e051acabf7883b9d705c787e098e728748e57a70bcaa2b972b7539555c41816a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-l9mww" podUID="4df2c9e3-a73b-411b-a21e-2c619d05304c" May 13 23:58:55.398876 containerd[1491]: time="2025-05-13T23:58:55.398818240Z" level=error msg="Failed to destroy network for sandbox \"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.400067 containerd[1491]: time="2025-05-13T23:58:55.400026404Z" level=error msg="Failed to destroy network for sandbox \"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.400998 containerd[1491]: time="2025-05-13T23:58:55.400951249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.401222 kubelet[2658]: E0513 23:58:55.401186 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 13 23:58:55.401303 kubelet[2658]: E0513 23:58:55.401250 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:58:55.401303 kubelet[2658]: E0513 23:58:55.401269 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:58:55.401384 kubelet[2658]: E0513 23:58:55.401308 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34c11148833b2c9608b9b10e9d63d0d395110033cccdc80c045b178408b35684\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" podUID="49054da8-6d54-4b6e-8457-befd52fd3a07" May 13 23:58:55.402043 containerd[1491]: time="2025-05-13T23:58:55.401981126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.402876 kubelet[2658]: E0513 23:58:55.402844 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.402940 kubelet[2658]: E0513 23:58:55.402886 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:58:55.402940 kubelet[2658]: E0513 23:58:55.402912 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:58:55.402993 kubelet[2658]: E0513 23:58:55.402945 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"447eef8578cbcf4ba6e965fbe39ea648f412808ccdec1b5ac418080a426e7be9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podUID="b4d74f95-719e-4dc2-b743-1167771220e5" May 13 23:58:55.411047 containerd[1491]: time="2025-05-13T23:58:55.410997408Z" level=error msg="Failed to destroy network for sandbox \"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.412269 containerd[1491]: time="2025-05-13T23:58:55.412236528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.412488 kubelet[2658]: E0513 23:58:55.412436 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:58:55.412530 kubelet[2658]: E0513 23:58:55.412506 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:58:55.412559 kubelet[2658]: E0513 23:58:55.412528 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:58:55.412599 kubelet[2658]: E0513 23:58:55.412575 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31eaf06bc3b2cba05b6f052414288b8689d71d1057ec6fae900ed081ae200c88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podUID="6687c9e7-fce4-4cea-b426-8f1da2fef6f3" May 13 23:58:55.461540 kubelet[2658]: E0513 23:58:55.460893 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:58:55.461983 containerd[1491]: time="2025-05-13T23:58:55.461656775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:58:55.625778 systemd[1]: run-netns-cni\x2d424ff76b\x2d7723\x2d14a3\x2d59f0\x2d7783e33da691.mount: Deactivated successfully. May 13 23:58:59.545597 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:44206.service - OpenSSH per-connection server daemon (10.0.0.1:44206). May 13 23:58:59.606826 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 44206 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:58:59.608763 sshd-session[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:59.615711 systemd-logind[1475]: New session 8 of user core. May 13 23:58:59.622837 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:58:59.791771 sshd[3621]: Connection closed by 10.0.0.1 port 44206 May 13 23:58:59.792114 sshd-session[3619]: pam_unix(sshd:session): session closed for user core May 13 23:58:59.795466 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:44206.service: Deactivated successfully. May 13 23:58:59.798301 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:58:59.801197 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. May 13 23:58:59.802631 systemd-logind[1475]: Removed session 8. May 13 23:59:00.312249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596987780.mount: Deactivated successfully. 
May 13 23:59:03.371366 containerd[1491]: time="2025-05-13T23:59:03.371310801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:03.490495 containerd[1491]: time="2025-05-13T23:59:03.490354505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 23:59:03.607069 containerd[1491]: time="2025-05-13T23:59:03.607009753Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:03.739059 containerd[1491]: time="2025-05-13T23:59:03.738862023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:03.739530 containerd[1491]: time="2025-05-13T23:59:03.739473130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.277750142s" May 13 23:59:03.739595 containerd[1491]: time="2025-05-13T23:59:03.739531258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 23:59:03.750698 containerd[1491]: time="2025-05-13T23:59:03.750637526Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:59:04.402190 containerd[1491]: time="2025-05-13T23:59:04.402132060Z" level=info msg="Container cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:04.812440 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). May 13 23:59:04.967219 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:04.969257 sshd-session[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.973920 systemd-logind[1475]: New session 9 of user core. May 13 23:59:04.983848 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 23:59:05.061951 kubelet[2658]: I0513 23:59:05.061854 2658 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:59:05.062483 kubelet[2658]: E0513 23:59:05.062380 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:05.673540 kubelet[2658]: E0513 23:59:05.673495 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:05.980831 containerd[1491]: time="2025-05-13T23:59:05.980693870Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\"" May 13 23:59:05.981317 containerd[1491]: time="2025-05-13T23:59:05.981278782Z" level=info msg="StartContainer for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\"" May 13 23:59:05.983333 containerd[1491]: time="2025-05-13T23:59:05.983290317Z" level=info msg="connecting to shim cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" protocol=ttrpc version=3 May 13 23:59:06.012865 systemd[1]: Started cri-containerd-cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2.scope - libcontainer container cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2. May 13 23:59:06.612795 sshd[3640]: Connection closed by 10.0.0.1 port 44208 May 13 23:59:06.366994 sshd-session[3638]: pam_unix(sshd:session): session closed for user core May 13 23:59:06.371041 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:44208.service: Deactivated successfully. May 13 23:59:06.773822 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:59:06.773928 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 13 23:59:06.373256 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:59:06.374141 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. May 13 23:59:06.375134 systemd-logind[1475]: Removed session 9.
May 13 23:59:07.005236 kubelet[2658]: E0513 23:59:07.005075 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:07.005744 containerd[1491]: time="2025-05-13T23:59:07.005658493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 13 23:59:07.240601 containerd[1491]: time="2025-05-13T23:59:07.240532361Z" level=info msg="StartContainer for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" returns successfully" May 13 23:59:08.005399 containerd[1491]: time="2025-05-13T23:59:08.005340403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 13 23:59:08.031135 containerd[1491]: time="2025-05-13T23:59:08.031075801Z" level=error msg="Failed to destroy network for sandbox \"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:08.033422 systemd[1]: run-netns-cni\x2d7ed9a9ea\x2d0109\x2dc5c5\x2d830b\x2d6a7edc8bcaad.mount: Deactivated successfully. May 13 23:59:08.217311 systemd[1]: cri-containerd-cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2.scope: Deactivated successfully. May 13 23:59:08.218258 containerd[1491]: time="2025-05-13T23:59:08.218206063Z" level=info msg="received exit event container_id:\"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" id:\"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" pid:3662 exit_status:1 exited_at:{seconds:1747180748 nanos:218006098}" May 13 23:59:08.218358 containerd[1491]: time="2025-05-13T23:59:08.218331418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" id:\"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" pid:3662 exit_status:1 exited_at:{seconds:1747180748 nanos:218006098}" May 13 23:59:08.245057 kubelet[2658]: E0513 23:59:08.244573 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:08.244643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2-rootfs.mount: Deactivated successfully. 
May 13 23:59:08.417263 kubelet[2658]: I0513 23:59:08.417185 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g2hgr" podStartSLOduration=6.772617701 podStartE2EDuration="28.417165511s" podCreationTimestamp="2025-05-13 23:58:40 +0000 UTC" firstStartedPulling="2025-05-13 23:58:42.095873725 +0000 UTC m=+19.211783852" lastFinishedPulling="2025-05-13 23:59:03.740421535 +0000 UTC m=+40.856331662" observedRunningTime="2025-05-13 23:59:08.416899121 +0000 UTC m=+45.532809269" watchObservedRunningTime="2025-05-13 23:59:08.417165511 +0000 UTC m=+45.533075638" May 13 23:59:08.619959 containerd[1491]: time="2025-05-13T23:59:08.619853392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:08.623453 kubelet[2658]: E0513 23:59:08.620180 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:08.623453 kubelet[2658]: E0513 23:59:08.620273 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:08.623453 kubelet[2658]: E0513 23:59:08.620295 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:08.623737 kubelet[2658]: E0513 23:59:08.620357 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"606ebf181d7c568473156c8a6f7c90be4391bd2b98b75e3525a518c14ddd54c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-l9mww" podUID="4df2c9e3-a73b-411b-a21e-2c619d05304c" May 13 23:59:08.942688 containerd[1491]: time="2025-05-13T23:59:08.942622011Z" level=error msg="Failed to destroy network for sandbox 
\"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:08.945009 systemd[1]: run-netns-cni\x2d9a4c8980\x2d8133\x2d67ef\x2db7a0\x2d699622ade63a.mount: Deactivated successfully. May 13 23:59:09.101361 containerd[1491]: time="2025-05-13T23:59:09.101254302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:09.101936 kubelet[2658]: E0513 23:59:09.101604 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:09.101936 kubelet[2658]: E0513 23:59:09.101717 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:09.101936 kubelet[2658]: E0513 23:59:09.101747 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:09.102101 kubelet[2658]: E0513 23:59:09.101822 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2accfc928ea47dea68cca98748830a148e32723445c1d0dcc9d0302f9f9a77c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podUID="6687c9e7-fce4-4cea-b426-8f1da2fef6f3" May 13 23:59:09.122003 containerd[1491]: time="2025-05-13T23:59:09.121860835Z" level=error msg="ExecSync for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" failed" error="rpc error: code = NotFound desc = failed to exec in container: 
failed to create exec \"e4cb07a5649e8b6c87bb5a19af0340ef6276ba24da60346e207d95fa22561846\": task cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2 not found" May 13 23:59:09.122242 kubelet[2658]: E0513 23:59:09.122173 2658 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"e4cb07a5649e8b6c87bb5a19af0340ef6276ba24da60346e207d95fa22561846\": task cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2 not found" containerID="cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 23:59:09.123479 containerd[1491]: time="2025-05-13T23:59:09.123426255Z" level=error msg="ExecSync for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2 not found" May 13 23:59:09.124701 kubelet[2658]: E0513 23:59:09.123637 2658 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2 not found" containerID="cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 23:59:09.212859 containerd[1491]: time="2025-05-13T23:59:09.212692964Z" level=error msg="ExecSync for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 13 23:59:09.213065 kubelet[2658]: E0513 23:59:09.213008 2658 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 23:59:09.249959 kubelet[2658]: I0513 23:59:09.249908 2658 scope.go:117] "RemoveContainer" containerID="cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" May 13 23:59:09.250398 kubelet[2658]: E0513 23:59:09.250016 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:09.252289 containerd[1491]: time="2025-05-13T23:59:09.252240973Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 13 23:59:09.707251 containerd[1491]: time="2025-05-13T23:59:09.707179322Z" level=info msg="Container b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:09.775585 containerd[1491]: time="2025-05-13T23:59:09.775523685Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\"" May 13 23:59:09.776280 containerd[1491]: time="2025-05-13T23:59:09.776258125Z" level=info msg="StartContainer for \"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\"" 
May 13 23:59:09.778019 containerd[1491]: time="2025-05-13T23:59:09.777976522Z" level=info msg="connecting to shim b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" protocol=ttrpc version=3 May 13 23:59:09.803855 systemd[1]: Started cri-containerd-b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f.scope - libcontainer container b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f. May 13 23:59:09.877135 containerd[1491]: time="2025-05-13T23:59:09.877048158Z" level=info msg="StartContainer for \"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" returns successfully" May 13 23:59:09.957453 systemd[1]: cri-containerd-b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f.scope: Deactivated successfully. May 13 23:59:09.960390 containerd[1491]: time="2025-05-13T23:59:09.960345626Z" level=info msg="received exit event container_id:\"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" id:\"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" pid:3795 exit_status:1 exited_at:{seconds:1747180749 nanos:960067653}" May 13 23:59:09.960514 containerd[1491]: time="2025-05-13T23:59:09.960484767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" id:\"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" pid:3795 exit_status:1 exited_at:{seconds:1747180749 nanos:960067653}" May 13 23:59:09.989645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f-rootfs.mount: Deactivated successfully. May 13 23:59:10.004563 kubelet[2658]: E0513 23:59:10.004482 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:10.004930 containerd[1491]: time="2025-05-13T23:59:10.004885398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 13 23:59:10.005101 containerd[1491]: time="2025-05-13T23:59:10.004883986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:59:10.086872 containerd[1491]: time="2025-05-13T23:59:10.086784540Z" level=error msg="Failed to destroy network for sandbox \"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.089631 systemd[1]: run-netns-cni\x2d59fa2849\x2ddfc1\x2dc437\x2d4940\x2d9423c7cbbe40.mount: Deactivated successfully. 
May 13 23:59:10.090165 containerd[1491]: time="2025-05-13T23:59:10.090126044Z" level=error msg="Failed to destroy network for sandbox \"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.091422 containerd[1491]: time="2025-05-13T23:59:10.091346680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.091762 kubelet[2658]: E0513 23:59:10.091630 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.091889 kubelet[2658]: E0513 23:59:10.091858 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:10.091968 kubelet[2658]: E0513 23:59:10.091915 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:10.092284 kubelet[2658]: E0513 23:59:10.091993 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e427086a2ae27c45d4df1a6ec66b154129d3d4160a66b910f9ecb52c75616321\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnl27" podUID="f40d0199-33e4-4e2f-9993-c63871326054" May 13 23:59:10.092834 containerd[1491]: time="2025-05-13T23:59:10.092789604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.092999 kubelet[2658]: E0513 23:59:10.092947 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:10.092999 kubelet[2658]: E0513 23:59:10.092981 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:10.093120 kubelet[2658]: E0513 23:59:10.093007 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:10.093120 kubelet[2658]: E0513 23:59:10.093052 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9d01847983eacefee8e28ecec575e43f496c5c231da4860f9d48666012bf99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:59:10.093349 systemd[1]: run-netns-cni\x2d1f4f9a5e\x2dc76e\x2d526e\x2d3e34\x2d71bb3c437ac1.mount: Deactivated successfully. 
May 13 23:59:10.257636 kubelet[2658]: I0513 23:59:10.257481 2658 scope.go:117] "RemoveContainer" containerID="cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2" May 13 23:59:10.258291 kubelet[2658]: I0513 23:59:10.257824 2658 scope.go:117] "RemoveContainer" containerID="b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" May 13 23:59:10.258291 kubelet[2658]: E0513 23:59:10.257899 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:10.258291 kubelet[2658]: E0513 23:59:10.257983 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-g2hgr_calico-system(b6b75de9-b29e-4ecb-9883-253cdb37c993)\"" pod="calico-system/calico-node-g2hgr" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" May 13 23:59:10.263145 containerd[1491]: time="2025-05-13T23:59:10.263093431Z" level=info msg="RemoveContainer for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\"" May 13 23:59:10.276185 containerd[1491]: time="2025-05-13T23:59:10.276132234Z" level=info msg="RemoveContainer for \"cbae3ac10da7255517bc83b8730937c6a7eb01690b3661af20a937afe5279bd2\" returns successfully" May 13 23:59:11.005308 containerd[1491]: time="2025-05-13T23:59:11.005255054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:11.005489 containerd[1491]: time="2025-05-13T23:59:11.005255175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:11.264222 kubelet[2658]: I0513 23:59:11.264082 2658 scope.go:117] "RemoveContainer" containerID="b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" May 13 23:59:11.264222 kubelet[2658]: E0513 23:59:11.264176 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:11.264696 kubelet[2658]: E0513 23:59:11.264258 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-g2hgr_calico-system(b6b75de9-b29e-4ecb-9883-253cdb37c993)\"" pod="calico-system/calico-node-g2hgr" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" May 13 23:59:11.368969 containerd[1491]: time="2025-05-13T23:59:11.368906420Z" level=error msg="Failed to destroy network for sandbox \"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.385098 systemd[1]: run-netns-cni\x2d0d9c7e19\x2d8e37\x2d7b4d\x2ddabf\x2d51f662bb51d3.mount: Deactivated successfully. May 13 23:59:11.387511 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722). 
May 13 23:59:11.442117 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:11.444453 sshd-session[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:11.450387 systemd-logind[1475]: New session 10 of user core. May 13 23:59:11.458032 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:59:11.462096 containerd[1491]: time="2025-05-13T23:59:11.462032478Z" level=error msg="Failed to destroy network for sandbox \"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.464469 systemd[1]: run-netns-cni\x2d0750d160\x2dd73d\x2d4dc9\x2d2c6c\x2d6e43c5546ff0.mount: Deactivated successfully. May 13 23:59:11.479351 containerd[1491]: time="2025-05-13T23:59:11.479275830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.479783 kubelet[2658]: E0513 23:59:11.479700 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.479861 kubelet[2658]: E0513 23:59:11.479814 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:11.479861 kubelet[2658]: E0513 23:59:11.479845 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:11.479935 kubelet[2658]: E0513 23:59:11.479908 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae21da31d94f442617fc6ec53c7232597a6c40dd24e9133d16ed6d05f7934758\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podUID="b4d74f95-719e-4dc2-b743-1167771220e5" May 13 23:59:11.588251 containerd[1491]: time="2025-05-13T23:59:11.588157049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.588575 kubelet[2658]: E0513 23:59:11.588517 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:11.588655 kubelet[2658]: E0513 23:59:11.588590 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:11.588655 kubelet[2658]: E0513 23:59:11.588612 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:11.588764 kubelet[2658]: E0513 23:59:11.588681 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b587b1e3c142c229b768333865bf6bbb6a7ab5405efa92856dda7bc9ec36f5c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" podUID="49054da8-6d54-4b6e-8457-befd52fd3a07" May 13 23:59:11.714167 sshd[3981]: Connection closed by 10.0.0.1 port 55722 May 13 23:59:11.714614 sshd-session[3941]: pam_unix(sshd:session): session closed for user core May 13 23:59:11.721343 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:55722.service: Deactivated successfully. May 13 23:59:11.724485 systemd[1]: session-10.scope: Deactivated successfully. 
May 13 23:59:11.725651 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. May 13 23:59:11.727240 systemd-logind[1475]: Removed session 10. May 13 23:59:12.809948 kubelet[2658]: I0513 23:59:12.809883 2658 scope.go:117] "RemoveContainer" containerID="b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" May 13 23:59:12.810506 kubelet[2658]: E0513 23:59:12.809988 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:12.810506 kubelet[2658]: E0513 23:59:12.810098 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-g2hgr_calico-system(b6b75de9-b29e-4ecb-9883-253cdb37c993)\"" pod="calico-system/calico-node-g2hgr" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" May 13 23:59:16.733870 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). May 13 23:59:16.797109 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:16.799022 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:16.803970 systemd-logind[1475]: New session 11 of user core. May 13 23:59:16.813845 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:59:16.984167 sshd[3998]: Connection closed by 10.0.0.1 port 55734 May 13 23:59:16.984481 sshd-session[3996]: pam_unix(sshd:session): session closed for user core May 13 23:59:16.989148 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:55734.service: Deactivated successfully. May 13 23:59:16.991856 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:59:16.992764 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. May 13 23:59:16.994312 systemd-logind[1475]: Removed session 11. May 13 23:59:20.004478 kubelet[2658]: E0513 23:59:20.004401 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:20.005268 containerd[1491]: time="2025-05-13T23:59:20.005045239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 13 23:59:20.072256 containerd[1491]: time="2025-05-13T23:59:20.072173719Z" level=error msg="Failed to destroy network for sandbox \"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:20.075351 systemd[1]: run-netns-cni\x2d9f282add\x2d51ef\x2da993\x2d6189\x2d03af32d712a1.mount: Deactivated successfully. 
May 13 23:59:20.076436 containerd[1491]: time="2025-05-13T23:59:20.076383256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:20.076898 kubelet[2658]: E0513 23:59:20.076816 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:20.077046 kubelet[2658]: E0513 23:59:20.076919 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:20.077046 kubelet[2658]: E0513 23:59:20.076947 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:20.077046 kubelet[2658]: E0513 23:59:20.077010 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"919466b16af10ae6a079a1e024474b25164a697b3133702712f4234f7ed1eeef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-l9mww" podUID="4df2c9e3-a73b-411b-a21e-2c619d05304c" May 13 23:59:21.998930 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). May 13 23:59:22.004778 containerd[1491]: time="2025-05-13T23:59:22.004728311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 13 23:59:22.055244 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:22.057580 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:22.062823 systemd-logind[1475]: New session 12 of user core. 
May 13 23:59:22.072911 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:59:22.296184 containerd[1491]: time="2025-05-13T23:59:22.296029989Z" level=error msg="Failed to destroy network for sandbox \"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:22.298952 systemd[1]: run-netns-cni\x2d0706e719\x2df77c\x2da468\x2d1610\x2d3b7fb95cf233.mount: Deactivated successfully. May 13 23:59:22.327382 sshd[4050]: Connection closed by 10.0.0.1 port 34412 May 13 23:59:22.327804 sshd-session[4048]: pam_unix(sshd:session): session closed for user core May 13 23:59:22.332843 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:34412.service: Deactivated successfully. May 13 23:59:22.334991 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:59:22.335815 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. May 13 23:59:22.336750 systemd-logind[1475]: Removed session 12. May 13 23:59:22.677605 containerd[1491]: time="2025-05-13T23:59:22.677513961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:22.677894 kubelet[2658]: E0513 23:59:22.677831 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:22.678283 kubelet[2658]: E0513 23:59:22.677916 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:22.678283 kubelet[2658]: E0513 23:59:22.677936 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:22.678283 kubelet[2658]: E0513 23:59:22.677988 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ef6ac0f6d95e32d720273c1a994f89e1d617b4af3713a53aaf4aedf4d541f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podUID="6687c9e7-fce4-4cea-b426-8f1da2fef6f3" May 13 23:59:23.005988 containerd[1491]: time="2025-05-13T23:59:23.005494722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:23.005988 containerd[1491]: time="2025-05-13T23:59:23.005614641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:59:23.267081 containerd[1491]: time="2025-05-13T23:59:23.266906185Z" level=error msg="Failed to destroy network for sandbox \"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.270450 systemd[1]: run-netns-cni\x2d0b90ed91\x2d5ef3\x2d22fb\x2d79c1\x2d0cf5d6e9be6c.mount: Deactivated successfully. May 13 23:59:23.288059 containerd[1491]: time="2025-05-13T23:59:23.287743644Z" level=error msg="Failed to destroy network for sandbox \"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.290577 systemd[1]: run-netns-cni\x2d0e906c0a\x2dfb98\x2dd4f5\x2d4afb\x2d9e68a0b7db09.mount: Deactivated successfully. 
May 13 23:59:23.334474 containerd[1491]: time="2025-05-13T23:59:23.334341522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.334889 kubelet[2658]: E0513 23:59:23.334817 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.335066 kubelet[2658]: E0513 23:59:23.334913 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:23.335066 kubelet[2658]: E0513 23:59:23.334948 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:23.335066 kubelet[2658]: E0513 23:59:23.335022 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f59fe4e2b9b6b538393a8842ec81e674b2bb6a1543fffa5719f96754cea5006b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podUID="b4d74f95-719e-4dc2-b743-1167771220e5" May 13 23:59:23.368050 containerd[1491]: time="2025-05-13T23:59:23.367935383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.368330 kubelet[2658]: E0513 23:59:23.368257 2658 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:23.368430 kubelet[2658]: E0513 23:59:23.368339 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:23.368430 kubelet[2658]: E0513 23:59:23.368363 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:23.368515 kubelet[2658]: E0513 23:59:23.368418 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d38a0bda5c01978a821ae3f13747783196ab5ad876885211901c4fd8b0b66a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:59:24.005719 containerd[1491]: time="2025-05-13T23:59:24.005618691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:24.211115 containerd[1491]: time="2025-05-13T23:59:24.210856848Z" level=error msg="Failed to destroy network for sandbox \"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:24.215621 systemd[1]: run-netns-cni\x2dd81642b4\x2da4bd\x2d0b60\x2d5374\x2dd4cd480dfa01.mount: Deactivated successfully. 
May 13 23:59:24.279885 containerd[1491]: time="2025-05-13T23:59:24.279680746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:24.280188 kubelet[2658]: E0513 23:59:24.280088 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:24.280188 kubelet[2658]: E0513 23:59:24.280187 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:24.280984 kubelet[2658]: E0513 23:59:24.280218 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:24.280984 kubelet[2658]: E0513 23:59:24.280290 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbf57444aa7af407acbdf3b1cc3777050adec4f2b4e2e0aee1601c0570b14aa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" podUID="49054da8-6d54-4b6e-8457-befd52fd3a07" May 13 23:59:25.004585 kubelet[2658]: E0513 23:59:25.004502 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:25.005628 containerd[1491]: time="2025-05-13T23:59:25.005079479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 13 23:59:25.288518 containerd[1491]: time="2025-05-13T23:59:25.288325226Z" level=error msg="Failed to destroy network for sandbox 
\"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:25.290824 systemd[1]: run-netns-cni\x2d7f1df84d\x2d9feb\x2d4f2b\x2d286e\x2d9052df557392.mount: Deactivated successfully. May 13 23:59:25.394185 containerd[1491]: time="2025-05-13T23:59:25.394096471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:25.394469 kubelet[2658]: E0513 23:59:25.394424 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:25.394960 kubelet[2658]: E0513 23:59:25.394497 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:25.394960 kubelet[2658]: E0513 23:59:25.394520 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:25.394960 kubelet[2658]: E0513 23:59:25.394575 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f629367e9193e3cd0c55d95cd19611a3030db930ccaf1ba2775aaa6d41e7b80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnl27" podUID="f40d0199-33e4-4e2f-9993-c63871326054" May 13 23:59:26.004955 kubelet[2658]: I0513 23:59:26.004887 2658 scope.go:117] "RemoveContainer" containerID="b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" May 13 23:59:26.005169 kubelet[2658]: E0513 23:59:26.005021 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:26.007658 containerd[1491]: time="2025-05-13T23:59:26.007588730Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 13 23:59:26.264643 containerd[1491]: time="2025-05-13T23:59:26.264407268Z" level=info msg="Container 08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:26.515371 containerd[1491]: time="2025-05-13T23:59:26.515177870Z" level=info msg="CreateContainer within sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\"" May 13 23:59:26.516074 containerd[1491]: time="2025-05-13T23:59:26.516025320Z" level=info msg="StartContainer for \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\"" May 13 23:59:26.517588 containerd[1491]: time="2025-05-13T23:59:26.517544975Z" level=info msg="connecting to shim 08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe" address="unix:///run/containerd/s/d0aa7498c6ad6d2ae8c50dffa2e9b83adc77ee3f7920b95b9a619418833640f8" protocol=ttrpc version=3 May 13 23:59:26.549895 systemd[1]: Started cri-containerd-08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe.scope - libcontainer container 08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe. May 13 23:59:26.690842 systemd[1]: cri-containerd-08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe.scope: Deactivated successfully. May 13 23:59:26.693286 containerd[1491]: time="2025-05-13T23:59:26.693210463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" id:\"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" pid:4261 exit_status:1 exited_at:{seconds:1747180766 nanos:692566280}" May 13 23:59:26.702547 containerd[1491]: time="2025-05-13T23:59:26.702439471Z" level=info msg="received exit event container_id:\"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" id:\"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" pid:4261 exit_status:1 exited_at:{seconds:1747180766 nanos:692566280}" May 13 23:59:26.706145 containerd[1491]: time="2025-05-13T23:59:26.706079342Z" level=info msg="StartContainer for \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" returns successfully" May 13 23:59:26.736432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe-rootfs.mount: Deactivated successfully. 
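The recurring dns.go:153 warning is independent of the Calico problem: the host resolv.conf lists more nameservers than the glibc resolver's limit of three (MAXNS), so kubelet drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8" to pods. A small sketch of that truncation follows, with a hypothetical fourth host entry (the server kubelet omitted is not shown in the log):

// Sketch of why kubelet logs "Nameserver limits exceeded": only the
// first three nameservers survive into a pod's resolv.conf.
package main

import "fmt"

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func trimNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// First three values mirror the log; the fourth is a hypothetical extra.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println("applied nameserver line is:", trimNameservers(host))
}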
May 13 23:59:27.306507 kubelet[2658]: I0513 23:59:27.306455 2658 scope.go:117] "RemoveContainer" containerID="b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f" May 13 23:59:27.307181 kubelet[2658]: I0513 23:59:27.307021 2658 scope.go:117] "RemoveContainer" containerID="08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe" May 13 23:59:27.307181 kubelet[2658]: E0513 23:59:27.307113 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:27.307253 kubelet[2658]: E0513 23:59:27.307211 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-g2hgr_calico-system(b6b75de9-b29e-4ecb-9883-253cdb37c993)\"" pod="calico-system/calico-node-g2hgr" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" May 13 23:59:27.310358 containerd[1491]: time="2025-05-13T23:59:27.309261123Z" level=info msg="RemoveContainer for \"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\"" May 13 23:59:27.343481 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:34424.service - OpenSSH per-connection server daemon (10.0.0.1:34424). May 13 23:59:27.391675 containerd[1491]: time="2025-05-13T23:59:27.391594041Z" level=info msg="RemoveContainer for \"b73cf0f3fa711a89005de8a43804835caaa954da88e151ee4c7ba62bfa9e006f\" returns successfully" May 13 23:59:27.401710 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 34424 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:27.403579 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:27.408125 systemd-logind[1475]: New session 13 of user core. May 13 23:59:27.417789 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:59:27.560704 sshd[4298]: Connection closed by 10.0.0.1 port 34424 May 13 23:59:27.562504 sshd-session[4296]: pam_unix(sshd:session): session closed for user core May 13 23:59:27.565808 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:34424.service: Deactivated successfully. May 13 23:59:27.568176 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:59:27.570204 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. May 13 23:59:27.571277 systemd-logind[1475]: Removed session 13. May 13 23:59:32.574468 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274). May 13 23:59:32.627839 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:32.629423 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:32.633595 systemd-logind[1475]: New session 14 of user core. May 13 23:59:32.642803 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:59:32.776398 sshd[4315]: Connection closed by 10.0.0.1 port 44274 May 13 23:59:32.776820 sshd-session[4313]: pam_unix(sshd:session): session closed for user core May 13 23:59:32.787989 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:44274.service: Deactivated successfully. May 13 23:59:32.790163 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:59:32.791937 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. 
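The "back-off 20s restarting failed container" message above is kubelet's CrashLoopBackOff: the restart delay roughly doubles per crash from a 10s base up to a 5m ceiling, which is why calico-node sits at 20s after its second failure. A simplified model of that schedule (crashLoopDelay is illustrative, not kubelet's implementation):

// Simplified CrashLoopBackOff schedule: delay doubles per restart,
// capped at maxDelay. kubelet uses a 10s base and a 5m cap.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r < 6; r++ {
		// restart 1 -> 20s, matching the "back-off 20s" entries in the log
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
}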
May 13 23:59:32.793384 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:44282.service - OpenSSH per-connection server daemon (10.0.0.1:44282). May 13 23:59:32.794602 systemd-logind[1475]: Removed session 14. May 13 23:59:32.853583 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 44282 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:32.855498 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:32.861641 systemd-logind[1475]: New session 15 of user core. May 13 23:59:32.872912 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:59:33.088904 sshd[4331]: Connection closed by 10.0.0.1 port 44282 May 13 23:59:33.091104 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 13 23:59:33.105735 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:44282.service: Deactivated successfully. May 13 23:59:33.107894 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:59:33.108611 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. May 13 23:59:33.111147 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284). May 13 23:59:33.112208 systemd-logind[1475]: Removed session 15. May 13 23:59:33.166282 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:33.167996 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:33.172458 systemd-logind[1475]: New session 16 of user core. May 13 23:59:33.181833 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:59:33.448769 sshd[4344]: Connection closed by 10.0.0.1 port 44284 May 13 23:59:33.449163 sshd-session[4341]: pam_unix(sshd:session): session closed for user core May 13 23:59:33.454283 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:44284.service: Deactivated successfully. May 13 23:59:33.456821 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:59:33.457848 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. May 13 23:59:33.458986 systemd-logind[1475]: Removed session 16. May 13 23:59:34.005377 containerd[1491]: time="2025-05-13T23:59:34.005328700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:59:34.190142 containerd[1491]: time="2025-05-13T23:59:34.190069056Z" level=error msg="Failed to destroy network for sandbox \"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:34.192450 systemd[1]: run-netns-cni\x2d105ded58\x2dae77\x2dcbaa\x2d1645\x2ddde001ce0c22.mount: Deactivated successfully. 
May 13 23:59:34.219160 containerd[1491]: time="2025-05-13T23:59:34.219088982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:34.219472 kubelet[2658]: E0513 23:59:34.219405 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:34.219950 kubelet[2658]: E0513 23:59:34.219495 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:34.219950 kubelet[2658]: E0513 23:59:34.219520 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:34.219950 kubelet[2658]: E0513 23:59:34.219576 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16cb4cf6cd7893ab10d3f7a8df68190a9901d269c9fc36e3f4c4705e1ee276c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:59:35.004751 kubelet[2658]: E0513 23:59:35.004639 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:35.005631 containerd[1491]: time="2025-05-13T23:59:35.005254113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 13 23:59:35.232788 containerd[1491]: time="2025-05-13T23:59:35.232703382Z" level=error msg="Failed to destroy network for sandbox \"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:35.235872 systemd[1]: run-netns-cni\x2d041c8a8f\x2dd4e4\x2da194\x2dc6c5\x2dc5a82dd221e2.mount: Deactivated successfully. May 13 23:59:35.281685 containerd[1491]: time="2025-05-13T23:59:35.281480016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:35.281992 kubelet[2658]: E0513 23:59:35.281932 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:35.282660 kubelet[2658]: E0513 23:59:35.282033 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:35.282660 kubelet[2658]: E0513 23:59:35.282061 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:35.282660 kubelet[2658]: E0513 23:59:35.282119 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b357d8e04be99cb7f84bb843821d0358df85e817693f20657f64e0cdb517105f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-l9mww" podUID="4df2c9e3-a73b-411b-a21e-2c619d05304c" May 13 23:59:37.004451 kubelet[2658]: E0513 23:59:37.004397 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:37.005202 containerd[1491]: time="2025-05-13T23:59:37.005153968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:37.006054 
containerd[1491]: time="2025-05-13T23:59:37.005863896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 13 23:59:37.186523 containerd[1491]: time="2025-05-13T23:59:37.186404273Z" level=error msg="Failed to destroy network for sandbox \"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.189644 systemd[1]: run-netns-cni\x2d72358a17\x2de7ba\x2dc99f\x2d1dc4\x2d9ca32792d571.mount: Deactivated successfully. May 13 23:59:37.290016 containerd[1491]: time="2025-05-13T23:59:37.289858222Z" level=error msg="Failed to destroy network for sandbox \"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.292583 systemd[1]: run-netns-cni\x2d86d07fb1\x2d493c\x2dafad\x2d41e5\x2d5930a49cc92f.mount: Deactivated successfully. May 13 23:59:37.367887 containerd[1491]: time="2025-05-13T23:59:37.367805347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.368400 kubelet[2658]: E0513 23:59:37.368324 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.368492 kubelet[2658]: E0513 23:59:37.368438 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:37.368492 kubelet[2658]: E0513 23:59:37.368469 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:37.368582 kubelet[2658]: E0513 23:59:37.368541 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"657b678f6976f7b7f059847eb7bef04dd39849327bd0436c27b628dbee2bcfd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podUID="b4d74f95-719e-4dc2-b743-1167771220e5" May 13 23:59:37.407710 containerd[1491]: time="2025-05-13T23:59:37.407557003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.408031 kubelet[2658]: E0513 23:59:37.407957 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:37.408112 kubelet[2658]: E0513 23:59:37.408053 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:37.408112 kubelet[2658]: E0513 23:59:37.408081 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:37.408182 kubelet[2658]: E0513 23:59:37.408143 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc190abb41f209f2d4a3e6b2ee95b211d1adf70c9e0c6518d8276fd3df2429d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnl27" podUID="f40d0199-33e4-4e2f-9993-c63871326054" May 13 23:59:38.005547 containerd[1491]: 
time="2025-05-13T23:59:38.005481006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 13 23:59:38.350534 containerd[1491]: time="2025-05-13T23:59:38.350461938Z" level=error msg="Failed to destroy network for sandbox \"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:38.352954 systemd[1]: run-netns-cni\x2d38695e87\x2dc51d\x2d8606\x2d6f03\x2defde4c926778.mount: Deactivated successfully. May 13 23:59:38.461685 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:54478.service - OpenSSH per-connection server daemon (10.0.0.1:54478). May 13 23:59:38.519612 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 54478 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:38.521306 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:38.526192 systemd-logind[1475]: New session 17 of user core. May 13 23:59:38.535863 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 23:59:38.666416 sshd[4547]: Connection closed by 10.0.0.1 port 54478 May 13 23:59:38.667647 containerd[1491]: time="2025-05-13T23:59:38.667582211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:38.668012 kubelet[2658]: E0513 23:59:38.667943 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:38.668342 kubelet[2658]: E0513 23:59:38.668031 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:38.668342 kubelet[2658]: E0513 23:59:38.668058 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:38.668342 kubelet[2658]: E0513 23:59:38.668124 2658 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed502b54bcd37ea840a06acac965240ded164059c044a379cd0f0e75a534317a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podUID="6687c9e7-fce4-4cea-b426-8f1da2fef6f3" May 13 23:59:38.668576 sshd-session[4545]: pam_unix(sshd:session): session closed for user core May 13 23:59:38.673483 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:54478.service: Deactivated successfully. May 13 23:59:38.675937 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:59:38.676659 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. May 13 23:59:38.677758 systemd-logind[1475]: Removed session 17. May 13 23:59:39.005144 containerd[1491]: time="2025-05-13T23:59:39.004966353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:39.170311 containerd[1491]: time="2025-05-13T23:59:39.170244661Z" level=error msg="Failed to destroy network for sandbox \"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:39.172604 systemd[1]: run-netns-cni\x2d21978476\x2ddce7\x2df8ac\x2de1d8\x2daae6a3001b5c.mount: Deactivated successfully. 
May 13 23:59:39.236747 containerd[1491]: time="2025-05-13T23:59:39.236688373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:39.237058 kubelet[2658]: E0513 23:59:39.237006 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:39.237112 kubelet[2658]: E0513 23:59:39.237086 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:39.237137 kubelet[2658]: E0513 23:59:39.237111 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" May 13 23:59:39.237206 kubelet[2658]: E0513 23:59:39.237169 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-ptzvd_calico-apiserver(49054da8-6d54-4b6e-8457-befd52fd3a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38013b984883d26c4d67625c3cdeb79c2d2549353eaf229f9ef65aadf9ebd267\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" podUID="49054da8-6d54-4b6e-8457-befd52fd3a07" May 13 23:59:40.004554 kubelet[2658]: E0513 23:59:40.004487 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:41.004486 kubelet[2658]: I0513 23:59:41.004431 2658 scope.go:117] "RemoveContainer" containerID="08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe" May 13 23:59:41.004631 kubelet[2658]: E0513 23:59:41.004531 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:41.004631 
kubelet[2658]: E0513 23:59:41.004622 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-g2hgr_calico-system(b6b75de9-b29e-4ecb-9883-253cdb37c993)\"" pod="calico-system/calico-node-g2hgr" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" May 13 23:59:42.004147 kubelet[2658]: E0513 23:59:42.004098 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:42.374181 containerd[1491]: time="2025-05-13T23:59:42.374128655Z" level=info msg="StopPodSandbox for \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\"" May 13 23:59:42.378541 containerd[1491]: time="2025-05-13T23:59:42.378504204Z" level=info msg="Container to stop \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:42.378541 containerd[1491]: time="2025-05-13T23:59:42.378531458Z" level=info msg="Container to stop \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:42.378541 containerd[1491]: time="2025-05-13T23:59:42.378542027Z" level=info msg="Container to stop \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 23:59:42.385499 systemd[1]: cri-containerd-670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed.scope: Deactivated successfully. May 13 23:59:42.386074 containerd[1491]: time="2025-05-13T23:59:42.386042694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" id:\"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" pid:3205 exit_status:137 exited_at:{seconds:1747180782 nanos:385652540}" May 13 23:59:42.415435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed-rootfs.mount: Deactivated successfully. May 13 23:59:43.033463 containerd[1491]: time="2025-05-13T23:59:43.033384423Z" level=info msg="shim disconnected" id=670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed namespace=k8s.io May 13 23:59:43.033463 containerd[1491]: time="2025-05-13T23:59:43.033429621Z" level=warning msg="cleaning up after shim disconnected" id=670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed namespace=k8s.io May 13 23:59:43.033463 containerd[1491]: time="2025-05-13T23:59:43.033439399Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:59:43.054482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed-shm.mount: Deactivated successfully. 
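The exit_status:137 reported for the stopped 670dd8e0... sandbox task uses the conventional "killed by signal" encoding: statuses above 128 mean 128 plus the signal number, so 137 is SIGKILL (9) from the runtime's stop path, while the earlier calico-node exit_status:1 was a plain in-process failure. A short decoder illustrating the convention (describeExit is a hypothetical helper):

// Decode container exit statuses as reported in the TaskExit events above.
package main

import (
	"fmt"
	"syscall"
)

func describeExit(status int) string {
	if status > 128 {
		sig := syscall.Signal(status - 128)
		return fmt.Sprintf("terminated by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited normally with code %d", status)
}

func main() {
	fmt.Println("exit_status 137:", describeExit(137)) // SIGKILL from StopPodSandbox
	fmt.Println("exit_status 1:  ", describeExit(1))   // the crash-looping calico-node container
}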
May 13 23:59:43.068490 containerd[1491]: time="2025-05-13T23:59:43.068409619Z" level=info msg="received exit event sandbox_id:\"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" exit_status:137 exited_at:{seconds:1747180782 nanos:385652540}" May 13 23:59:43.070043 containerd[1491]: time="2025-05-13T23:59:43.069973341Z" level=info msg="TearDown network for sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" successfully" May 13 23:59:43.070043 containerd[1491]: time="2025-05-13T23:59:43.069997076Z" level=info msg="StopPodSandbox for \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" returns successfully" May 13 23:59:43.137484 kubelet[2658]: I0513 23:59:43.137388 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-xtables-lock\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.137484 kubelet[2658]: I0513 23:59:43.137445 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-lib-calico\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137512 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6b75de9-b29e-4ecb-9883-253cdb37c993-node-certs\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137535 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-run-calico\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137558 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nx54\" (UniqueName: \"kubernetes.io/projected/b6b75de9-b29e-4ecb-9883-253cdb37c993-kube-api-access-2nx54\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137586 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6b75de9-b29e-4ecb-9883-253cdb37c993-tigera-ca-bundle\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137604 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-log-dir\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138207 kubelet[2658]: I0513 23:59:43.137625 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-policysync\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138591 kubelet[2658]: I0513 23:59:43.137642 2658 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-flexvol-driver-host\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.138591 kubelet[2658]: I0513 23:59:43.137791 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.138591 kubelet[2658]: I0513 23:59:43.137844 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.138591 kubelet[2658]: I0513 23:59:43.137869 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.144412 kubelet[2658]: I0513 23:59:43.139846 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.144412 kubelet[2658]: I0513 23:59:43.139939 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-policysync" (OuterVolumeSpecName: "policysync") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.144412 kubelet[2658]: I0513 23:59:43.139974 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.144412 kubelet[2658]: E0513 23:59:43.141861 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.144412 kubelet[2658]: E0513 23:59:43.141886 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.144412 kubelet[2658]: E0513 23:59:43.141895 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="flexvol-driver" May 13 23:59:43.144412 kubelet[2658]: E0513 23:59:43.141916 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="install-cni" May 13 23:59:43.144412 kubelet[2658]: I0513 23:59:43.141954 2658 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.144849 kubelet[2658]: I0513 23:59:43.141963 2658 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.144849 kubelet[2658]: E0513 23:59:43.141996 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.144849 kubelet[2658]: I0513 23:59:43.142023 2658 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" containerName="calico-node" May 13 23:59:43.145404 systemd[1]: var-lib-kubelet-pods-b6b75de9\x2db29e\x2d4ecb\x2d9883\x2d253cdb37c993-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 13 23:59:43.147439 kubelet[2658]: I0513 23:59:43.146868 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b75de9-b29e-4ecb-9883-253cdb37c993-node-certs" (OuterVolumeSpecName: "node-certs") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 23:59:43.152119 kubelet[2658]: I0513 23:59:43.151846 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b75de9-b29e-4ecb-9883-253cdb37c993-kube-api-access-2nx54" (OuterVolumeSpecName: "kube-api-access-2nx54") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "kube-api-access-2nx54". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 23:59:43.152293 systemd[1]: var-lib-kubelet-pods-b6b75de9\x2db29e\x2d4ecb\x2d9883\x2d253cdb37c993-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2nx54.mount: Deactivated successfully. May 13 23:59:43.155726 kubelet[2658]: I0513 23:59:43.153316 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b75de9-b29e-4ecb-9883-253cdb37c993-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 23:59:43.156972 systemd[1]: var-lib-kubelet-pods-b6b75de9\x2db29e\x2d4ecb\x2d9883\x2d253cdb37c993-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
May 13 23:59:43.164125 systemd[1]: Created slice kubepods-besteffort-pod5d43a1b3_6a91_483a_b8ca_59f9b8b05278.slice - libcontainer container kubepods-besteffort-pod5d43a1b3_6a91_483a_b8ca_59f9b8b05278.slice. May 13 23:59:43.237989 kubelet[2658]: I0513 23:59:43.237920 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-bin-dir\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.237989 kubelet[2658]: I0513 23:59:43.237961 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-net-dir\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.237989 kubelet[2658]: I0513 23:59:43.237978 2658 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-lib-modules\") pod \"b6b75de9-b29e-4ecb-9883-253cdb37c993\" (UID: \"b6b75de9-b29e-4ecb-9883-253cdb37c993\") " May 13 23:59:43.238258 kubelet[2658]: I0513 23:59:43.238011 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-var-run-calico\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238258 kubelet[2658]: I0513 23:59:43.238033 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-cni-net-dir\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238258 kubelet[2658]: I0513 23:59:43.238047 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscbn\" (UniqueName: \"kubernetes.io/projected/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-kube-api-access-bscbn\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238258 kubelet[2658]: I0513 23:59:43.238066 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-tigera-ca-bundle\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238258 kubelet[2658]: I0513 23:59:43.238082 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-var-lib-calico\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238450 kubelet[2658]: I0513 23:59:43.238098 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-lib-modules\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " 
pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238450 kubelet[2658]: I0513 23:59:43.238113 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-xtables-lock\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238450 kubelet[2658]: I0513 23:59:43.238127 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-cni-bin-dir\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238450 kubelet[2658]: I0513 23:59:43.238141 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-cni-log-dir\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238450 kubelet[2658]: I0513 23:59:43.238157 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-node-certs\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238184 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-flexvol-driver-host\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238212 2658 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5d43a1b3-6a91-483a-b8ca-59f9b8b05278-policysync\") pod \"calico-node-7rfjl\" (UID: \"5d43a1b3-6a91-483a-b8ca-59f9b8b05278\") " pod="calico-system/calico-node-7rfjl" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238241 2658 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6b75de9-b29e-4ecb-9883-253cdb37c993-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238273 2658 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238284 2658 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-policysync\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238293 2658 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238611 kubelet[2658]: I0513 23:59:43.238302 2658 reconciler_common.go:288] "Volume detached for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/b6b75de9-b29e-4ecb-9883-253cdb37c993-node-certs\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238309 2658 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238318 2658 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238328 2658 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238343 2658 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2nx54\" (UniqueName: \"kubernetes.io/projected/b6b75de9-b29e-4ecb-9883-253cdb37c993-kube-api-access-2nx54\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238420 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.238886 kubelet[2658]: I0513 23:59:43.238448 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.239098 kubelet[2658]: I0513 23:59:43.238485 2658 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6b75de9-b29e-4ecb-9883-253cdb37c993" (UID: "b6b75de9-b29e-4ecb-9883-253cdb37c993"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 23:59:43.340877 kubelet[2658]: I0513 23:59:43.340579 2658 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.340877 kubelet[2658]: I0513 23:59:43.340627 2658 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.340877 kubelet[2658]: I0513 23:59:43.340640 2658 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6b75de9-b29e-4ecb-9883-253cdb37c993-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 13 23:59:43.352996 kubelet[2658]: I0513 23:59:43.352791 2658 scope.go:117] "RemoveContainer" containerID="08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe" May 13 23:59:43.357379 containerd[1491]: time="2025-05-13T23:59:43.357194499Z" level=info msg="RemoveContainer for \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\"" May 13 23:59:43.363194 systemd[1]: Removed slice kubepods-besteffort-podb6b75de9_b29e_4ecb_9883_253cdb37c993.slice - libcontainer container kubepods-besteffort-podb6b75de9_b29e_4ecb_9883_253cdb37c993.slice. May 13 23:59:43.363413 systemd[1]: kubepods-besteffort-podb6b75de9_b29e_4ecb_9883_253cdb37c993.slice: Consumed 1.046s CPU time, 162.8M memory peak, 4K read from disk, 160.4M written to disk. May 13 23:59:43.379190 containerd[1491]: time="2025-05-13T23:59:43.378947189Z" level=info msg="RemoveContainer for \"08e9e638f38f50a7fa27d4de8a9bdcb9f885dc44b961787d13c669e80beb11fe\" returns successfully" May 13 23:59:43.379745 kubelet[2658]: I0513 23:59:43.379317 2658 scope.go:117] "RemoveContainer" containerID="0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087" May 13 23:59:43.383193 containerd[1491]: time="2025-05-13T23:59:43.383143327Z" level=info msg="RemoveContainer for \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\"" May 13 23:59:43.406921 containerd[1491]: time="2025-05-13T23:59:43.406553600Z" level=info msg="RemoveContainer for \"0367456d3f5488042e21b10a9f45cc8976816a8d13097fd3f741fd765b130087\" returns successfully" May 13 23:59:43.407102 kubelet[2658]: I0513 23:59:43.406941 2658 scope.go:117] "RemoveContainer" containerID="9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161" May 13 23:59:43.409763 containerd[1491]: time="2025-05-13T23:59:43.409695261Z" level=info msg="RemoveContainer for \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\"" May 13 23:59:43.423345 containerd[1491]: time="2025-05-13T23:59:43.423289627Z" level=info msg="RemoveContainer for \"9665d3c07611c01a647d65afda8f4a704c3e92b88179236d85ae84b975111161\" returns successfully" May 13 23:59:43.468416 kubelet[2658]: E0513 23:59:43.468347 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:43.469043 containerd[1491]: time="2025-05-13T23:59:43.468893193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7rfjl,Uid:5d43a1b3-6a91-483a-b8ca-59f9b8b05278,Namespace:calico-system,Attempt:0,}" May 13 23:59:43.680803 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:54480.service - OpenSSH per-connection server daemon 
(10.0.0.1:54480). May 13 23:59:43.689493 containerd[1491]: time="2025-05-13T23:59:43.689436776Z" level=info msg="connecting to shim 538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6" address="unix:///run/containerd/s/3c4be032e13237f6e9eae7646cfbff732cc09885df642f5f83a765fbcb4ffaec" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:43.732998 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 54480 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:43.735349 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:43.736008 systemd[1]: Started cri-containerd-538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6.scope - libcontainer container 538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6. May 13 23:59:43.740552 systemd-logind[1475]: New session 18 of user core. May 13 23:59:43.742566 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:59:43.973774 containerd[1491]: time="2025-05-13T23:59:43.973607059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7rfjl,Uid:5d43a1b3-6a91-483a-b8ca-59f9b8b05278,Namespace:calico-system,Attempt:0,} returns sandbox id \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\"" May 13 23:59:43.974454 kubelet[2658]: E0513 23:59:43.974411 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:43.976329 containerd[1491]: time="2025-05-13T23:59:43.976295033Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:59:44.049796 sshd[4682]: Connection closed by 10.0.0.1 port 54480 May 13 23:59:44.052208 sshd-session[4640]: pam_unix(sshd:session): session closed for user core May 13 23:59:44.056797 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:54480.service: Deactivated successfully. May 13 23:59:44.059204 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:59:44.060077 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. May 13 23:59:44.061118 systemd-logind[1475]: Removed session 18. 
May 13 23:59:44.076677 containerd[1491]: time="2025-05-13T23:59:44.076609647Z" level=info msg="Container ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:44.319714 containerd[1491]: time="2025-05-13T23:59:44.319530450Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\"" May 13 23:59:44.320245 containerd[1491]: time="2025-05-13T23:59:44.320181218Z" level=info msg="StartContainer for \"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\"" May 13 23:59:44.321750 containerd[1491]: time="2025-05-13T23:59:44.321718590Z" level=info msg="connecting to shim ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388" address="unix:///run/containerd/s/3c4be032e13237f6e9eae7646cfbff732cc09885df642f5f83a765fbcb4ffaec" protocol=ttrpc version=3 May 13 23:59:44.348944 systemd[1]: Started cri-containerd-ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388.scope - libcontainer container ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388. May 13 23:59:44.470303 containerd[1491]: time="2025-05-13T23:59:44.470169174Z" level=info msg="StartContainer for \"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\" returns successfully" May 13 23:59:44.563959 systemd[1]: cri-containerd-ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388.scope: Deactivated successfully. May 13 23:59:44.566402 systemd[1]: cri-containerd-ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388.scope: Consumed 58ms CPU time, 15.7M memory peak, 7.8M read from disk, 6.3M written to disk. May 13 23:59:44.568358 containerd[1491]: time="2025-05-13T23:59:44.567908932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\" id:\"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\" pid:4739 exited_at:{seconds:1747180784 nanos:566260315}" May 13 23:59:44.568358 containerd[1491]: time="2025-05-13T23:59:44.568048311Z" level=info msg="received exit event container_id:\"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\" id:\"ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388\" pid:4739 exited_at:{seconds:1747180784 nanos:566260315}" May 13 23:59:44.600291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea5007f9a2aac2dd3f231ae32ed456ee9507ebd84d1eb6cb5a0254cf65758388-rootfs.mount: Deactivated successfully. 
May 13 23:59:45.005601 containerd[1491]: time="2025-05-13T23:59:45.004628774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:59:45.007784 kubelet[2658]: I0513 23:59:45.007705 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b75de9-b29e-4ecb-9883-253cdb37c993" path="/var/lib/kubelet/pods/b6b75de9-b29e-4ecb-9883-253cdb37c993/volumes" May 13 23:59:45.063690 containerd[1491]: time="2025-05-13T23:59:45.063597062Z" level=error msg="Failed to destroy network for sandbox \"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:45.065084 containerd[1491]: time="2025-05-13T23:59:45.065028191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:45.065358 kubelet[2658]: E0513 23:59:45.065308 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:45.065434 kubelet[2658]: E0513 23:59:45.065388 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:45.065434 kubelet[2658]: E0513 23:59:45.065417 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ppkvw" May 13 23:59:45.065521 kubelet[2658]: E0513 23:59:45.065471 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ppkvw_calico-system(6c89c23a-8ac4-492c-ae00-402f1ec38ec8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01b140cdcca4e0ecd2e2bb9e2cab8bf84ef127669db946abec152c6b5dbc093d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ppkvw" podUID="6c89c23a-8ac4-492c-ae00-402f1ec38ec8" May 13 23:59:45.066502 systemd[1]: run-netns-cni\x2d540225d2\x2dbd6a\x2da4ec\x2dc189\x2daf363d85c178.mount: Deactivated successfully. May 13 23:59:45.363140 kubelet[2658]: E0513 23:59:45.363043 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:45.364866 containerd[1491]: time="2025-05-13T23:59:45.364811183Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:59:45.380583 containerd[1491]: time="2025-05-13T23:59:45.380507879Z" level=info msg="Container 1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:45.394263 containerd[1491]: time="2025-05-13T23:59:45.394192273Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\"" May 13 23:59:45.397540 containerd[1491]: time="2025-05-13T23:59:45.395169884Z" level=info msg="StartContainer for \"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\"" May 13 23:59:45.397540 containerd[1491]: time="2025-05-13T23:59:45.396749469Z" level=info msg="connecting to shim 1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f" address="unix:///run/containerd/s/3c4be032e13237f6e9eae7646cfbff732cc09885df642f5f83a765fbcb4ffaec" protocol=ttrpc version=3 May 13 23:59:45.418956 systemd[1]: Started cri-containerd-1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f.scope - libcontainer container 1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f. May 13 23:59:45.539960 containerd[1491]: time="2025-05-13T23:59:45.539909437Z" level=info msg="StartContainer for \"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\" returns successfully" May 13 23:59:46.368252 kubelet[2658]: E0513 23:59:46.368209 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:47.089652 containerd[1491]: time="2025-05-13T23:59:47.089566927Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 13 23:59:47.091882 systemd[1]: cri-containerd-1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f.scope: Deactivated successfully. May 13 23:59:47.092535 systemd[1]: cri-containerd-1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f.scope: Consumed 835ms CPU time, 115.6M memory peak, 101.2M read from disk. 
May 13 23:59:47.092825 containerd[1491]: time="2025-05-13T23:59:47.091931134Z" level=info msg="received exit event container_id:\"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\" id:\"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\" pid:4825 exited_at:{seconds:1747180787 nanos:91528154}" May 13 23:59:47.092825 containerd[1491]: time="2025-05-13T23:59:47.092290599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\" id:\"1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f\" pid:4825 exited_at:{seconds:1747180787 nanos:91528154}" May 13 23:59:47.119303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc15ff78cc0106a1e444411e5faeb0039c69afc83cdc690612caf8c559b716f-rootfs.mount: Deactivated successfully. May 13 23:59:47.370527 kubelet[2658]: E0513 23:59:47.370385 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:48.375644 kubelet[2658]: E0513 23:59:48.375603 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:48.387026 containerd[1491]: time="2025-05-13T23:59:48.386964160Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:59:49.062635 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664). May 13 23:59:49.189800 containerd[1491]: time="2025-05-13T23:59:49.189736028Z" level=info msg="Container 0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:49.256766 sshd[4860]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:49.258760 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:49.263526 systemd-logind[1475]: New session 19 of user core. May 13 23:59:49.272867 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:59:49.708295 containerd[1491]: time="2025-05-13T23:59:49.708238697Z" level=info msg="CreateContainer within sandbox \"538e7ee5d1e272b956dff026e45a6dc09d3f29d8de2b975bc3c55f5d284ff1c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\"" May 13 23:59:49.708875 containerd[1491]: time="2025-05-13T23:59:49.708816467Z" level=info msg="StartContainer for \"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\"" May 13 23:59:49.710343 containerd[1491]: time="2025-05-13T23:59:49.710298879Z" level=info msg="connecting to shim 0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9" address="unix:///run/containerd/s/3c4be032e13237f6e9eae7646cfbff732cc09885df642f5f83a765fbcb4ffaec" protocol=ttrpc version=3 May 13 23:59:49.739816 systemd[1]: Started cri-containerd-0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9.scope - libcontainer container 0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9. 
May 13 23:59:49.870157 sshd[4862]: Connection closed by 10.0.0.1 port 34664 May 13 23:59:49.870527 sshd-session[4860]: pam_unix(sshd:session): session closed for user core May 13 23:59:49.875335 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:34664.service: Deactivated successfully. May 13 23:59:49.877508 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:59:49.878418 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. May 13 23:59:49.879426 systemd-logind[1475]: Removed session 19. May 13 23:59:50.004254 kubelet[2658]: E0513 23:59:50.004125 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:50.004849 containerd[1491]: time="2025-05-13T23:59:50.004544880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:50.005001 containerd[1491]: time="2025-05-13T23:59:50.004945777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 13 23:59:50.154148 containerd[1491]: time="2025-05-13T23:59:50.154087063Z" level=info msg="StartContainer for \"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\" returns successfully" May 13 23:59:50.383759 kubelet[2658]: E0513 23:59:50.383705 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:50.477002 containerd[1491]: time="2025-05-13T23:59:50.476962841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\" id:\"55a8e28e9cc95c459da6571e3dc29f53da7d24466bbfe5b2caf13aed47eca73f\" pid:4935 exit_status:1 exited_at:{seconds:1747180790 nanos:476616400}" May 13 23:59:50.583907 containerd[1491]: time="2025-05-13T23:59:50.583837857Z" level=error msg="Failed to destroy network for sandbox \"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.584506 containerd[1491]: time="2025-05-13T23:59:50.584442308Z" level=error msg="Failed to destroy network for sandbox \"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.586453 systemd[1]: run-netns-cni\x2d25878dcc\x2d2d13\x2d2d6f\x2d377b\x2d7b6e42cbe5a5.mount: Deactivated successfully. May 13 23:59:50.586596 systemd[1]: run-netns-cni\x2da87c0266\x2d75bd\x2d1939\x2dfecb\x2d17eadc6915bd.mount: Deactivated successfully. 
May 13 23:59:50.749507 containerd[1491]: time="2025-05-13T23:59:50.749337472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.750027 kubelet[2658]: E0513 23:59:50.749571 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.750027 kubelet[2658]: E0513 23:59:50.749627 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:50.750027 kubelet[2658]: E0513 23:59:50.749653 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" May 13 23:59:50.750163 kubelet[2658]: E0513 23:59:50.749718 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c6bb84fcc-8lbpv_calico-apiserver(b4d74f95-719e-4dc2-b743-1167771220e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18c62639483967e5282cb28bedc1e29a3b2f1397fc69e135f7f6926776b55b1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podUID="b4d74f95-719e-4dc2-b743-1167771220e5" May 13 23:59:51.005697 kubelet[2658]: E0513 23:59:51.005376 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:51.009438 containerd[1491]: time="2025-05-13T23:59:51.008047528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 13 23:59:51.055386 containerd[1491]: time="2025-05-13T23:59:51.055295445Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.055680 kubelet[2658]: E0513 23:59:51.055605 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.055766 kubelet[2658]: E0513 23:59:51.055701 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:51.055766 kubelet[2658]: E0513 23:59:51.055729 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rnl27" May 13 23:59:51.055848 kubelet[2658]: E0513 23:59:51.055792 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rnl27_kube-system(f40d0199-33e4-4e2f-9993-c63871326054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21c1ea81c7fe498a917e9f2ef8bbaad2e443436d8da3a1c875bd0fe3fd6886bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rnl27" podUID="f40d0199-33e4-4e2f-9993-c63871326054" May 13 23:59:51.354478 kubelet[2658]: I0513 23:59:51.353931 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7rfjl" podStartSLOduration=8.353904007 podStartE2EDuration="8.353904007s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:51.35333811 +0000 UTC m=+88.469248237" watchObservedRunningTime="2025-05-13 23:59:51.353904007 +0000 UTC m=+88.469814134" May 13 23:59:51.374936 containerd[1491]: time="2025-05-13T23:59:51.374864548Z" level=error msg="Failed to destroy network for sandbox \"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.377453 systemd[1]: run-netns-cni\x2d4ec8b9b9\x2d5ce2\x2d9d1a\x2db2a7\x2d4e8c24f4c8c9.mount: Deactivated successfully. May 13 23:59:51.385903 kubelet[2658]: E0513 23:59:51.385853 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:51.461373 containerd[1491]: time="2025-05-13T23:59:51.461306520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\" id:\"3a6d7dd1e04313d906140cfdef34e819be94ca17d9b0eff54b9e2a5f4af6fd29\" pid:5055 exit_status:1 exited_at:{seconds:1747180791 nanos:460879372}" May 13 23:59:51.493215 containerd[1491]: time="2025-05-13T23:59:51.493100590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.493518 kubelet[2658]: E0513 23:59:51.493450 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.493587 kubelet[2658]: E0513 23:59:51.493541 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:51.493587 kubelet[2658]: E0513 23:59:51.493562 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l9mww" May 13 23:59:51.493650 kubelet[2658]: E0513 23:59:51.493623 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l9mww_kube-system(4df2c9e3-a73b-411b-a21e-2c619d05304c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58c3ad7294d28d8c0cf3586ca86230792073c5cd29a2e0197e07f8a33f75bf5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-6f6b679f8f-l9mww" podUID="4df2c9e3-a73b-411b-a21e-2c619d05304c" May 13 23:59:52.005345 containerd[1491]: time="2025-05-13T23:59:52.005282633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 13 23:59:52.166478 containerd[1491]: time="2025-05-13T23:59:52.166412431Z" level=error msg="Failed to destroy network for sandbox \"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:52.169388 systemd[1]: run-netns-cni\x2d41c15a47\x2d14ae\x2d6b7f\x2dcd75\x2da6c57137f31b.mount: Deactivated successfully. May 13 23:59:52.301623 containerd[1491]: time="2025-05-13T23:59:52.301442037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:52.301820 kubelet[2658]: E0513 23:59:52.301768 2658 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:52.302286 kubelet[2658]: E0513 23:59:52.301848 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:52.302286 kubelet[2658]: E0513 23:59:52.301869 2658 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" May 13 23:59:52.302286 kubelet[2658]: E0513 23:59:52.301923 2658 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-688d9b6545-z68xp_calico-system(6687c9e7-fce4-4cea-b426-8f1da2fef6f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5de47949b02a517d806589f018f33edcb460dd8359e2caaa0a69925bc470fea3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podUID="6687c9e7-fce4-4cea-b426-8f1da2fef6f3" May 13 23:59:54.004998 containerd[1491]: time="2025-05-13T23:59:54.004933364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:54.884044 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:34668.service - OpenSSH per-connection server daemon (10.0.0.1:34668). May 13 23:59:55.076213 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 34668 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 13 23:59:55.077870 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:55.082414 systemd-logind[1475]: New session 20 of user core. May 13 23:59:55.090301 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 23:59:55.100836 systemd-networkd[1413]: cali021b37c4c7c: Link UP May 13 23:59:55.101651 systemd-networkd[1413]: cali021b37c4c7c: Gained carrier May 13 23:59:55.243106 containerd[1491]: 2025-05-13 23:59:54.155 [INFO][5104] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:59:55.243106 containerd[1491]: 2025-05-13 23:59:54.167 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0 calico-apiserver-5c6bb84fcc- calico-apiserver 49054da8-6d54-4b6e-8457-befd52fd3a07 708 0 2025-05-13 23:58:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c6bb84fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c6bb84fcc-ptzvd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali021b37c4c7c [] []}} ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-" May 13 23:59:55.243106 containerd[1491]: 2025-05-13 23:59:54.167 [INFO][5104] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.243106 containerd[1491]: 2025-05-13 23:59:54.207 [INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" HandleID="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.216 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" HandleID="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b4b0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c6bb84fcc-ptzvd", "timestamp":"2025-05-13 23:59:54.207703768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.216 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.216 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.216 [INFO][5131] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.218 [INFO][5131] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" host="localhost" May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.222 [INFO][5131] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.513 [INFO][5131] ipam/ipam.go 521: Ran out of existing affine blocks for host host="localhost" May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.567 [INFO][5131] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.569 [INFO][5131] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.88.128/26 May 13 23:59:55.243628 containerd[1491]: 2025-05-13 23:59:54.569 [INFO][5131] ipam/ipam.go 550: Found unclaimed block host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.243900 sshd[5142]: Connection closed by 10.0.0.1 port 34668 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.569 [INFO][5131] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.590 [INFO][5131] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.590 [INFO][5131] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.591 [INFO][5131] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.594 [INFO][5131] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.595 [INFO][5131] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.595 [INFO][5131] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.749 [INFO][5131] ipam/ipam_block_reader_writer.go 264: Successfully created block May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.749 [INFO][5131] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="localhost" 
subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.780 [INFO][5131] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.780 [INFO][5131] ipam/ipam.go 585: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.781 [INFO][5131] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" host="localhost" May 13 23:59:55.244156 containerd[1491]: 2025-05-13 23:59:54.796 [INFO][5131] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87 May 13 23:59:55.244027 sshd-session[5140]: pam_unix(sshd:session): session closed for user core May 13 23:59:55.244478 containerd[1491]: 2025-05-13 23:59:55.038 [INFO][5131] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" host="localhost" May 13 23:59:55.244478 containerd[1491]: 2025-05-13 23:59:55.089 [INFO][5131] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" host="localhost" May 13 23:59:55.244478 containerd[1491]: 2025-05-13 23:59:55.090 [INFO][5131] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" host="localhost" May 13 23:59:55.244478 containerd[1491]: 2025-05-13 23:59:55.090 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:59:55.244478 containerd[1491]: 2025-05-13 23:59:55.090 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" HandleID="k8s-pod-network.9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.244593 containerd[1491]: 2025-05-13 23:59:55.093 [INFO][5104] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0", GenerateName:"calico-apiserver-5c6bb84fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"49054da8-6d54-4b6e-8457-befd52fd3a07", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6bb84fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c6bb84fcc-ptzvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali021b37c4c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:59:55.244648 containerd[1491]: 2025-05-13 23:59:55.093 [INFO][5104] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.128/32] ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.244648 containerd[1491]: 2025-05-13 23:59:55.093 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali021b37c4c7c ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.244648 containerd[1491]: 2025-05-13 23:59:55.102 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.244749 containerd[1491]: 2025-05-13 23:59:55.102 [INFO][5104] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0", GenerateName:"calico-apiserver-5c6bb84fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"49054da8-6d54-4b6e-8457-befd52fd3a07", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6bb84fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87", Pod:"calico-apiserver-5c6bb84fcc-ptzvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali021b37c4c7c", MAC:"f2:ad:5f:18:cb:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:59:55.244800 containerd[1491]: 2025-05-13 23:59:55.240 [INFO][5104] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-ptzvd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--ptzvd-eth0" May 13 23:59:55.249276 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:34668.service: Deactivated successfully. May 13 23:59:55.252227 systemd[1]: session-20.scope: Deactivated successfully. May 13 23:59:55.253449 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. May 13 23:59:55.254494 systemd-logind[1475]: Removed session 20. May 13 23:59:56.273722 containerd[1491]: time="2025-05-13T23:59:56.273656676Z" level=info msg="connecting to shim 9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87" address="unix:///run/containerd/s/68a51658551bd1405d7b26142c964a7f5a88d73aafd48e6519746c926763d9b2" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:56.296803 systemd[1]: Started cri-containerd-9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87.scope - libcontainer container 9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87. 
May 13 23:59:56.308700 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:59:56.415955 containerd[1491]: time="2025-05-13T23:59:56.415904013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-ptzvd,Uid:49054da8-6d54-4b6e-8457-befd52fd3a07,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87\"" May 13 23:59:56.417307 containerd[1491]: time="2025-05-13T23:59:56.417280776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:59:56.701843 systemd-networkd[1413]: cali021b37c4c7c: Gained IPv6LL May 13 23:59:57.004740 kubelet[2658]: E0513 23:59:57.004583 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:59:57.005183 containerd[1491]: time="2025-05-13T23:59:57.005089875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,}" May 13 23:59:57.221823 systemd-networkd[1413]: cali1001f753a78: Link UP May 13 23:59:57.222413 systemd-networkd[1413]: cali1001f753a78: Gained carrier May 13 23:59:57.237547 containerd[1491]: 2025-05-13 23:59:57.127 [INFO][5224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:59:57.237547 containerd[1491]: 2025-05-13 23:59:57.140 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ppkvw-eth0 csi-node-driver- calico-system 6c89c23a-8ac4-492c-ae00-402f1ec38ec8 605 0 2025-05-13 23:58:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ppkvw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1001f753a78 [] []}} ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-" May 13 23:59:57.237547 containerd[1491]: 2025-05-13 23:59:57.140 [INFO][5224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.237547 containerd[1491]: 2025-05-13 23:59:57.171 [INFO][5237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" HandleID="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Workload="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.183 [INFO][5237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" HandleID="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Workload="localhost-k8s-csi--node--driver--ppkvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000309710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ppkvw", "timestamp":"2025-05-13 23:59:57.171028808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.183 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.184 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.184 [INFO][5237] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.187 [INFO][5237] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" host="localhost" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.192 [INFO][5237] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.198 [INFO][5237] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.200 [INFO][5237] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.202 [INFO][5237] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:59:57.237867 containerd[1491]: 2025-05-13 23:59:57.203 [INFO][5237] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" host="localhost" May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.204 [INFO][5237] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.208 [INFO][5237] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" host="localhost" May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.215 [INFO][5237] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" host="localhost" May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.215 [INFO][5237] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" host="localhost" May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.215 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
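[Annotation] Every IPAM handle in this journal is the literal prefix "k8s-pod-network." followed by the sandbox (container) ID -- e.g. handle="k8s-pod-network.89a0aad24cbc..." for the csi-node-driver sandbox of the same ID. That handle is what lets a later CNI DEL release exactly the addresses this CNI ADD claimed. A small sketch of the pattern as it appears above:

```go
package main

import (
	"fmt"
	"strings"
)

// handlePrefix matches every IPAM handle printed in this journal.
const handlePrefix = "k8s-pod-network."

// handleID builds the IPAM handle for a sandbox; sandboxID inverts it.
func handleID(containerID string) string { return handlePrefix + containerID }

func sandboxID(handle string) (string, bool) {
	return strings.CutPrefix(handle, handlePrefix)
}

func main() {
	h := handleID("89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c")
	id, ok := sandboxID(h)
	fmt.Println(h)
	fmt.Println(id, ok)
}
```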
May 13 23:59:57.238172 containerd[1491]: 2025-05-13 23:59:57.215 [INFO][5237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" HandleID="k8s-pod-network.89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Workload="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.238421 containerd[1491]: 2025-05-13 23:59:57.219 [INFO][5224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ppkvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c89c23a-8ac4-492c-ae00-402f1ec38ec8", ResourceVersion:"605", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ppkvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1001f753a78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:59:57.238421 containerd[1491]: 2025-05-13 23:59:57.219 [INFO][5224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.238519 containerd[1491]: 2025-05-13 23:59:57.219 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1001f753a78 ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.238519 containerd[1491]: 2025-05-13 23:59:57.222 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.238584 containerd[1491]: 2025-05-13 23:59:57.222 [INFO][5224] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ppkvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6c89c23a-8ac4-492c-ae00-402f1ec38ec8", ResourceVersion:"605", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c", Pod:"csi-node-driver-ppkvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1001f753a78", MAC:"ae:b1:e4:4b:15:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:59:57.238655 containerd[1491]: 2025-05-13 23:59:57.233 [INFO][5224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" Namespace="calico-system" Pod="csi-node-driver-ppkvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--ppkvw-eth0" May 13 23:59:57.274437 containerd[1491]: time="2025-05-13T23:59:57.274260522Z" level=info msg="connecting to shim 89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c" address="unix:///run/containerd/s/07aa313c0021fc627fbba74e4616ff7994dce52ae6d1d175af6831a5b6526e11" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:57.301890 systemd[1]: Started cri-containerd-89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c.scope - libcontainer container 89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c. May 13 23:59:57.315163 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:59:57.386759 containerd[1491]: time="2025-05-13T23:59:57.386654169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ppkvw,Uid:6c89c23a-8ac4-492c-ae00-402f1ec38ec8,Namespace:calico-system,Attempt:0,} returns sandbox id \"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c\"" May 13 23:59:57.784699 kernel: bpftool[5431]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:59:58.027332 systemd-networkd[1413]: vxlan.calico: Link UP May 13 23:59:58.027342 systemd-networkd[1413]: vxlan.calico: Gained carrier May 13 23:59:59.197877 systemd-networkd[1413]: cali1001f753a78: Gained IPv6LL May 13 23:59:59.709840 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL May 14 00:00:00.275803 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 14 00:00:00.277200 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:44502.service - OpenSSH per-connection server daemon (10.0.0.1:44502). 
May 14 00:00:01.004296 kubelet[2658]: E0514 00:00:01.004240 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:01.800613 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 44502 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:01.802714 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:01.807604 systemd-logind[1475]: New session 21 of user core. May 14 00:00:01.817880 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:00:02.211953 systemd[1]: logrotate.service: Deactivated successfully. May 14 00:00:03.004636 kubelet[2658]: E0514 00:00:03.004547 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:03.005281 containerd[1491]: time="2025-05-14T00:00:03.005069640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,}" May 14 00:00:03.474734 sshd[5516]: Connection closed by 10.0.0.1 port 44502 May 14 00:00:03.477272 sshd-session[5514]: pam_unix(sshd:session): session closed for user core May 14 00:00:03.481173 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:44502.service: Deactivated successfully. May 14 00:00:03.483544 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:00:03.484345 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. May 14 00:00:03.485345 systemd-logind[1475]: Removed session 21. May 14 00:00:04.005745 containerd[1491]: time="2025-05-14T00:00:04.005541333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,}" May 14 00:00:04.005745 containerd[1491]: time="2025-05-14T00:00:04.005700513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,}" May 14 00:00:04.462068 containerd[1491]: time="2025-05-14T00:00:04.461999955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:05.026232 containerd[1491]: time="2025-05-14T00:00:05.026130038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 00:00:05.506488 containerd[1491]: time="2025-05-14T00:00:05.506406507Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:05.543981 containerd[1491]: time="2025-05-14T00:00:05.543880528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:05.544681 containerd[1491]: time="2025-05-14T00:00:05.544616160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 9.127298382s" May 14 00:00:05.544728 containerd[1491]: time="2025-05-14T00:00:05.544704593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 00:00:05.546368 containerd[1491]: time="2025-05-14T00:00:05.546091401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:00:05.547331 containerd[1491]: time="2025-05-14T00:00:05.547291216Z" level=info msg="CreateContainer within sandbox \"9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:00:05.615533 systemd-networkd[1413]: cali8b32313e308: Link UP May 14 00:00:05.615868 systemd-networkd[1413]: cali8b32313e308: Gained carrier May 14 00:00:05.689324 containerd[1491]: 2025-05-14 00:00:04.276 [INFO][5542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--rnl27-eth0 coredns-6f6b679f8f- kube-system f40d0199-33e4-4e2f-9993-c63871326054 710 0 2025-05-13 23:58:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-rnl27 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b32313e308 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-" May 14 00:00:05.689324 containerd[1491]: 2025-05-14 00:00:04.277 [INFO][5542] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.689324 containerd[1491]: 2025-05-14 00:00:04.489 [INFO][5559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" HandleID="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Workload="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.866 [INFO][5559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" HandleID="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Workload="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002aafd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-rnl27", "timestamp":"2025-05-14 00:00:04.489368747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.866 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.866 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.866 [INFO][5559] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.910 [INFO][5559] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" host="localhost" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:04.998 [INFO][5559] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:05.003 [INFO][5559] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:05.007 [INFO][5559] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:05.013 [INFO][5559] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:00:05.689630 containerd[1491]: 2025-05-14 00:00:05.013 [INFO][5559] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" host="localhost" May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.015 [INFO][5559] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7 May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.033 [INFO][5559] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" host="localhost" May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.600 [INFO][5559] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" host="localhost" May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.600 [INFO][5559] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" host="localhost" May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.601 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:00:05.690002 containerd[1491]: 2025-05-14 00:00:05.601 [INFO][5559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" HandleID="k8s-pod-network.245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Workload="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.690184 containerd[1491]: 2025-05-14 00:00:05.603 [INFO][5542] cni-plugin/k8s.go 386: Populated endpoint ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--rnl27-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f40d0199-33e4-4e2f-9993-c63871326054", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-rnl27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b32313e308", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.690274 containerd[1491]: 2025-05-14 00:00:05.603 [INFO][5542] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.690274 containerd[1491]: 2025-05-14 00:00:05.604 [INFO][5542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b32313e308 ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.690274 containerd[1491]: 2025-05-14 00:00:05.616 [INFO][5542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.690369 containerd[1491]: 2025-05-14 00:00:05.617 
[INFO][5542] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--rnl27-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f40d0199-33e4-4e2f-9993-c63871326054", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7", Pod:"coredns-6f6b679f8f-rnl27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b32313e308", MAC:"7a:94:96:83:8d:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.690369 containerd[1491]: 2025-05-14 00:00:05.684 [INFO][5542] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" Namespace="kube-system" Pod="coredns-6f6b679f8f-rnl27" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rnl27-eth0" May 14 00:00:05.907034 systemd-networkd[1413]: cali4c2f2c2f2ca: Link UP May 14 00:00:05.907258 systemd-networkd[1413]: cali4c2f2c2f2ca: Gained carrier May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:04.852 [INFO][5566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0 calico-apiserver-5c6bb84fcc- calico-apiserver b4d74f95-719e-4dc2-b743-1167771220e5 704 0 2025-05-13 23:58:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c6bb84fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c6bb84fcc-8lbpv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c2f2c2f2ca [] []}} ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:04.852 [INFO][5566] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:04.883 [INFO][5581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" HandleID="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.001 [INFO][5581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" HandleID="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ee790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c6bb84fcc-8lbpv", "timestamp":"2025-05-14 00:00:04.883797196 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.001 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.600 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.601 [INFO][5581] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.623 [INFO][5581] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.829 [INFO][5581] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.833 [INFO][5581] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.835 [INFO][5581] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.837 [INFO][5581] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.837 [INFO][5581] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.839 [INFO][5581] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.862 [INFO][5581] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.902 [INFO][5581] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.902 [INFO][5581] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" host="localhost" May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.902 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
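[Annotation] The timestamps make the host-wide IPAM lock directly visible: process [5581] (calico-apiserver-5c6bb84fcc-8lbpv) logs "About to acquire host-wide IPAM lock" at 00:00:05.001 but only acquires it at 00:00:05.600 -- the same instant [5559] releases it after claiming 192.168.88.131 for coredns-6f6b679f8f-rnl27. Concurrent CNI ADDs on one node are therefore strictly serialized, which is why .131 and .132 come out in order. A toy Go model of that serialization:

```go
package main

import (
	"fmt"
	"sync"
)

// Two concurrent CNI ADDs contend for one host-wide lock, so address
// assignments come out strictly one at a time, as the journal timestamps
// above show for processes 5559 and 5581.
func main() {
	var (
		mu   sync.Mutex
		next = 131 // .131 and .132 were the two contended assignments
		wg   sync.WaitGroup
	)
	for _, pid := range []int{5559, 5581} {
		wg.Add(1)
		go func(pid int) {
			defer wg.Done()
			mu.Lock() // blocks until the other CNI ADD releases
			ip := fmt.Sprintf("192.168.88.%d", next)
			next++
			mu.Unlock()
			fmt.Printf("[%d] assigned %s\n", pid, ip)
		}(pid)
	}
	wg.Wait()
}
```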
May 14 00:00:05.987373 containerd[1491]: 2025-05-14 00:00:05.902 [INFO][5581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" HandleID="k8s-pod-network.a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Workload="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.904 [INFO][5566] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0", GenerateName:"calico-apiserver-5c6bb84fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d74f95-719e-4dc2-b743-1167771220e5", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6bb84fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c6bb84fcc-8lbpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c2f2c2f2ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.904 [INFO][5566] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.905 [INFO][5566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c2f2c2f2ca ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.907 [INFO][5566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.907 [INFO][5566] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0", GenerateName:"calico-apiserver-5c6bb84fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4d74f95-719e-4dc2-b743-1167771220e5", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c6bb84fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c", Pod:"calico-apiserver-5c6bb84fcc-8lbpv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c2f2c2f2ca", MAC:"86:ef:b7:0b:fc:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.988111 containerd[1491]: 2025-05-14 00:00:05.984 [INFO][5566] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" Namespace="calico-apiserver" Pod="calico-apiserver-5c6bb84fcc-8lbpv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c6bb84fcc--8lbpv-eth0" May 14 00:00:06.004202 kubelet[2658]: E0514 00:00:06.004160 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:06.004691 containerd[1491]: time="2025-05-14T00:00:06.004602052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,}" May 14 00:00:06.076268 containerd[1491]: time="2025-05-14T00:00:06.076208200Z" level=info msg="Container e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:06.270300 systemd-networkd[1413]: calif3902afc877: Link UP May 14 00:00:06.270727 systemd-networkd[1413]: calif3902afc877: Gained carrier May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:05.623 [INFO][5589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0 calico-kube-controllers-688d9b6545- calico-system 6687c9e7-fce4-4cea-b426-8f1da2fef6f3 707 0 2025-05-13 23:58:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:688d9b6545 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-688d9b6545-z68xp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif3902afc877 [] []}} ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:05.624 [INFO][5589] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:05.852 [INFO][5623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" HandleID="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Workload="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.029 [INFO][5623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" HandleID="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Workload="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-688d9b6545-z68xp", "timestamp":"2025-05-14 00:00:05.85206171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.029 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.029 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.029 [INFO][5623] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.037 [INFO][5623] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.130 [INFO][5623] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.136 [INFO][5623] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.138 [INFO][5623] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.140 [INFO][5623] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.140 [INFO][5623] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.142 [INFO][5623] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.208 [INFO][5623] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.264 [INFO][5623] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.264 [INFO][5623] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" host="localhost" May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.264 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
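[Annotation] The kubelet "Nameserver limits exceeded" errors that recur through this stretch mean the node's resolv.conf lists more nameservers than the resolver supports -- glibc caps the list at three (MAXNS) -- so kubelet truncates it to 1.1.1.1, 1.0.0.1, 8.8.8.8 before handing it to pods and logs the omission. A rough reimplementation of that truncation (the limit of three is glibc's; kubelet's internal constant may differ in name):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

// clampNameservers parses resolv.conf content and keeps at most three
// nameserver entries, mirroring the truncation kubelet warns about.
func clampNameservers(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := clampNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // matches the kubelet log line
	fmt.Println("omitted:", dropped)
}
```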
May 14 00:00:06.300758 containerd[1491]: 2025-05-14 00:00:06.264 [INFO][5623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" HandleID="k8s-pod-network.43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Workload="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.267 [INFO][5589] cni-plugin/k8s.go 386: Populated endpoint ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0", GenerateName:"calico-kube-controllers-688d9b6545-", Namespace:"calico-system", SelfLink:"", UID:"6687c9e7-fce4-4cea-b426-8f1da2fef6f3", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"688d9b6545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-688d9b6545-z68xp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3902afc877", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.267 [INFO][5589] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.267 [INFO][5589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3902afc877 ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.271 [INFO][5589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.271 [INFO][5589] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0", GenerateName:"calico-kube-controllers-688d9b6545-", Namespace:"calico-system", SelfLink:"", UID:"6687c9e7-fce4-4cea-b426-8f1da2fef6f3", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"688d9b6545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe", Pod:"calico-kube-controllers-688d9b6545-z68xp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3902afc877", MAC:"2a:51:4d:a4:25:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:06.301309 containerd[1491]: 2025-05-14 00:00:06.297 [INFO][5589] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" Namespace="calico-system" Pod="calico-kube-controllers-688d9b6545-z68xp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--688d9b6545--z68xp-eth0" May 14 00:00:06.327818 containerd[1491]: time="2025-05-14T00:00:06.327505305Z" level=info msg="CreateContainer within sandbox \"9acf9f3f748f2e659cb5b7777552bef2c10b1eef7a213730c10d64beb3433a87\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63\"" May 14 00:00:06.334720 containerd[1491]: time="2025-05-14T00:00:06.329684728Z" level=info msg="StartContainer for \"e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63\"" May 14 00:00:06.334720 containerd[1491]: time="2025-05-14T00:00:06.330936174Z" level=info msg="connecting to shim e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63" address="unix:///run/containerd/s/68a51658551bd1405d7b26142c964a7f5a88d73aafd48e6519746c926763d9b2" protocol=ttrpc version=3 May 14 00:00:06.359849 containerd[1491]: time="2025-05-14T00:00:06.359761505Z" level=info msg="connecting to shim 245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7" address="unix:///run/containerd/s/8c22452b4eef1427b99106e21614d88a7f154a9473ab64a8a5522fab76cac31f" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:06.408634 containerd[1491]: time="2025-05-14T00:00:06.408017701Z" level=info msg="connecting to shim a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c" 
address="unix:///run/containerd/s/c4471ea1ef6ec9761667b200c9128eb778e5cb2716f07449e8cf7f8d03a42642" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:06.408396 systemd-networkd[1413]: cali0fc551f20da: Link UP May 14 00:00:06.409617 systemd-networkd[1413]: cali0fc551f20da: Gained carrier May 14 00:00:06.429984 systemd[1]: Started cri-containerd-e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63.scope - libcontainer container e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63. May 14 00:00:06.438547 containerd[1491]: time="2025-05-14T00:00:06.438264419Z" level=info msg="connecting to shim 43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" address="unix:///run/containerd/s/37a917a9de45511b310ec9ae9869d28db9b09d6693ed526a3a8cd12b4690610a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.300 [INFO][5645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--l9mww-eth0 coredns-6f6b679f8f- kube-system 4df2c9e3-a73b-411b-a21e-2c619d05304c 709 0 2025-05-13 23:58:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-l9mww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fc551f20da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.300 [INFO][5645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.331 [INFO][5673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" HandleID="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Workload="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.341 [INFO][5673] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" HandleID="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Workload="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000412390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-l9mww", "timestamp":"2025-05-14 00:00:06.331362103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.342 [INFO][5673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.342 [INFO][5673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.342 [INFO][5673] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.345 [INFO][5673] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.351 [INFO][5673] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.356 [INFO][5673] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.358 [INFO][5673] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.360 [INFO][5673] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.360 [INFO][5673] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.362 [INFO][5673] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.372 [INFO][5673] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.394 [INFO][5673] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.394 [INFO][5673] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" host="localhost" May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.394 [INFO][5673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
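The ipam/ipam.go entries above trace Calico's block-affinity allocation end to end: the host confirms its affinity for 192.168.88.128/26, loads the block, claims the first free address in it (192.168.88.134), and releases the host-wide lock. Below is a minimal, self-contained sketch of that last step under the assumption that allocation is simply "lowest unassigned address in the affine block"; the helper names are hypothetical, and real Calico persists block bitmaps and handles in its datastore rather than in memory.

// A toy model of "Attempting to assign 1 addresses from block": pick the
// lowest unassigned address in the host's affine /26.
package main

import (
	"fmt"
	"net/netip"
)

func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
	used := map[netip.Addr]bool{}
	// .128-.133 were handed out earlier in this log (e.g. .133 just above)
	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.88.134")) < 0; a = a.Next() {
		used[a] = true
	}
	if a, ok := firstFree(block, used); ok {
		fmt.Println("next assignment:", a) // 192.168.88.134, matching the claim above
	}
}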
May 14 00:00:06.439254 containerd[1491]: 2025-05-14 00:00:06.394 [INFO][5673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" HandleID="k8s-pod-network.83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Workload="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.399 [INFO][5645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--l9mww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4df2c9e3-a73b-411b-a21e-2c619d05304c", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-l9mww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fc551f20da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.399 [INFO][5645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.399 [INFO][5645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fc551f20da ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.410 [INFO][5645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.412 
[INFO][5645] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--l9mww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4df2c9e3-a73b-411b-a21e-2c619d05304c", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c", Pod:"coredns-6f6b679f8f-l9mww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fc551f20da", MAC:"c6:60:af:3a:b2:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:06.439786 containerd[1491]: 2025-05-14 00:00:06.428 [INFO][5645] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-l9mww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--l9mww-eth0" May 14 00:00:06.439937 systemd[1]: Started cri-containerd-245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7.scope - libcontainer container 245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7. May 14 00:00:06.466792 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:00:06.473555 systemd[1]: Started cri-containerd-a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c.scope - libcontainer container a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c. May 14 00:00:06.493881 systemd[1]: Started cri-containerd-43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe.scope - libcontainer container 43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe. 
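The endpoint dumps above are Go struct literals of projectcalico's v3.WorkloadEndpoint; note that the pod address is always recorded as a /32 in IPNetworks even though it was allocated from a /26 block. A trimmed stand-in for the spec carrying only the fields visible in the log (the real type lives in libcalico-go and has many more fields):

// Subset of the WorkloadEndpoint spec dumped by cni-plugin/k8s.go above,
// populated with the coredns-6f6b679f8f-l9mww values from this log.
package main

import (
	"encoding/json"
	"fmt"
)

type WorkloadEndpointSpec struct {
	Orchestrator  string   `json:"orchestrator"`
	Node          string   `json:"node"`
	ContainerID   string   `json:"containerID"`
	Pod           string   `json:"pod"`
	Endpoint      string   `json:"endpoint"`
	IPNetworks    []string `json:"ipNetworks"` // always /32 for pod IPs
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac"`
}

func main() {
	spec := WorkloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "localhost",
		ContainerID:   "83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c",
		Pod:           "coredns-6f6b679f8f-l9mww",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.88.134/32"},
		InterfaceName: "cali0fc551f20da",
		MAC:           "c6:60:af:3a:b2:af",
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}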
May 14 00:00:06.509521 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:00:06.522557 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:00:06.569576 containerd[1491]: time="2025-05-14T00:00:06.569529440Z" level=info msg="StartContainer for \"e7b3d40585a8e250b4975cc3dd10a2bada8cc31cf5a9ee34ebcda25148f3bd63\" returns successfully" May 14 00:00:06.577201 containerd[1491]: time="2025-05-14T00:00:06.577158580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rnl27,Uid:f40d0199-33e4-4e2f-9993-c63871326054,Namespace:kube-system,Attempt:0,} returns sandbox id \"245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7\"" May 14 00:00:06.578466 kubelet[2658]: E0514 00:00:06.578430 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:06.582507 containerd[1491]: time="2025-05-14T00:00:06.582469339Z" level=info msg="CreateContainer within sandbox \"245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:06.634456 containerd[1491]: time="2025-05-14T00:00:06.634401740Z" level=info msg="connecting to shim 83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c" address="unix:///run/containerd/s/87af34268d6f609cb1ce8b0c12a1425a67390a5e713f0115b95588c72fa9fcac" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:06.637589 containerd[1491]: time="2025-05-14T00:00:06.637560609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c6bb84fcc-8lbpv,Uid:b4d74f95-719e-4dc2-b743-1167771220e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c\"" May 14 00:00:06.643164 containerd[1491]: time="2025-05-14T00:00:06.643095102Z" level=info msg="CreateContainer within sandbox \"a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:00:06.659526 containerd[1491]: time="2025-05-14T00:00:06.659431022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-688d9b6545-z68xp,Uid:6687c9e7-fce4-4cea-b426-8f1da2fef6f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe\"" May 14 00:00:06.662894 systemd[1]: Started cri-containerd-83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c.scope - libcontainer container 83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c. 
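Each "connecting to shim" entry above carries an address="unix:///run/containerd/s/<hash>" that containerd dials with a ttrpc client (protocol=ttrpc version=3). The sketch below only parses such an address and confirms the socket accepts a connection; the ttrpc handshake itself is omitted.

// Parse a shim address as logged above and check the socket is connectable.
// A real client would then speak ttrpc over this connection.
package main

import (
	"fmt"
	"net"
	"net/url"
	"time"
)

func dialShim(addr string) error {
	u, err := url.Parse(addr)
	if err != nil {
		return err
	}
	if u.Scheme != "unix" {
		return fmt.Errorf("unexpected scheme %q", u.Scheme)
	}
	c, err := net.DialTimeout("unix", u.Path, 2*time.Second)
	if err != nil {
		return err
	}
	defer c.Close()
	fmt.Println("shim socket reachable:", u.Path)
	return nil
}

func main() {
	// address copied from the log; it only resolves on that host
	if err := dialShim("unix:///run/containerd/s/87af34268d6f609cb1ce8b0c12a1425a67390a5e713f0115b95588c72fa9fcac"); err != nil {
		fmt.Println("dial failed:", err)
	}
}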
May 14 00:00:06.677699 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:00:06.749105 containerd[1491]: time="2025-05-14T00:00:06.749041649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l9mww,Uid:4df2c9e3-a73b-411b-a21e-2c619d05304c,Namespace:kube-system,Attempt:0,} returns sandbox id \"83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c\"" May 14 00:00:06.749867 systemd-networkd[1413]: cali8b32313e308: Gained IPv6LL May 14 00:00:06.750255 kubelet[2658]: E0514 00:00:06.749961 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:06.751982 containerd[1491]: time="2025-05-14T00:00:06.751946793Z" level=info msg="CreateContainer within sandbox \"83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:06.872467 containerd[1491]: time="2025-05-14T00:00:06.872402573Z" level=info msg="Container ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:06.902357 kubelet[2658]: I0514 00:00:06.902280 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-ptzvd" podStartSLOduration=77.773411406 podStartE2EDuration="1m26.902255735s" podCreationTimestamp="2025-05-13 23:58:40 +0000 UTC" firstStartedPulling="2025-05-13 23:59:56.416951968 +0000 UTC m=+93.532862095" lastFinishedPulling="2025-05-14 00:00:05.545796287 +0000 UTC m=+102.661706424" observedRunningTime="2025-05-14 00:00:06.847982356 +0000 UTC m=+103.963892483" watchObservedRunningTime="2025-05-14 00:00:06.902255735 +0000 UTC m=+104.018165852" May 14 00:00:06.923781 containerd[1491]: time="2025-05-14T00:00:06.923411496Z" level=info msg="Container b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:07.074933 containerd[1491]: time="2025-05-14T00:00:07.074856300Z" level=info msg="CreateContainer within sandbox \"245a2545f2bd092dac88da3b32c32715f52e987df30b438b1d4abf7d9f05fff7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21\"" May 14 00:00:07.075643 containerd[1491]: time="2025-05-14T00:00:07.075597393Z" level=info msg="StartContainer for \"ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21\"" May 14 00:00:07.078707 containerd[1491]: time="2025-05-14T00:00:07.075930823Z" level=info msg="Container 3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:07.078707 containerd[1491]: time="2025-05-14T00:00:07.076605847Z" level=info msg="connecting to shim ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21" address="unix:///run/containerd/s/8c22452b4eef1427b99106e21614d88a7f154a9473ab64a8a5522fab76cac31f" protocol=ttrpc version=3 May 14 00:00:07.104812 systemd[1]: Started cri-containerd-ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21.scope - libcontainer container ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21. 
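The pod_startup_latency_tracker line above encodes a simple relation: podStartSLOduration is the end-to-end start duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, here about 9.13s). The sketch below reproduces that arithmetic from the logged wall-clock timestamps; it agrees with the logged 77.77s and 1m26.9s only approximately, since kubelet computes them from its own monotonic readings (the m=+... offsets).

// Recompute the calico-apiserver-5c6bb84fcc-ptzvd startup durations
// from the timestamps in the tracker line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-13 23:58:40 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-05-13 23:59:56.416951968 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-05-14 00:00:05.545796287 +0000 UTC")  // lastFinishedPulling
	observed := parse("2025-05-14 00:00:06.847982356 +0000 UTC")  // observedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("E2E:", e2e) // ~1m26.85s, close to the logged podStartE2EDuration
	fmt.Println("SLO:", slo) // ~1m17.7s, close to the logged podStartSLOduration
}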
May 14 00:00:07.136801 containerd[1491]: time="2025-05-14T00:00:07.136572072Z" level=info msg="CreateContainer within sandbox \"a2d9e632084b6e6d8774a26cecf05acd2352e14c3e2751c68346eeb2f867251c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546\"" May 14 00:00:07.137300 containerd[1491]: time="2025-05-14T00:00:07.137278798Z" level=info msg="StartContainer for \"b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546\"" May 14 00:00:07.138303 containerd[1491]: time="2025-05-14T00:00:07.138277022Z" level=info msg="connecting to shim b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546" address="unix:///run/containerd/s/c4471ea1ef6ec9761667b200c9128eb778e5cb2716f07449e8cf7f8d03a42642" protocol=ttrpc version=3 May 14 00:00:07.162825 systemd[1]: Started cri-containerd-b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546.scope - libcontainer container b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546. May 14 00:00:07.389940 systemd-networkd[1413]: cali4c2f2c2f2ca: Gained IPv6LL May 14 00:00:07.437220 containerd[1491]: time="2025-05-14T00:00:07.437151736Z" level=info msg="CreateContainer within sandbox \"83be6a35c6c0715d24b69a55654c9fa337f5f1d28ce75ae5c797be034f080f2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995\"" May 14 00:00:07.439219 containerd[1491]: time="2025-05-14T00:00:07.438037190Z" level=info msg="StartContainer for \"3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995\"" May 14 00:00:07.439219 containerd[1491]: time="2025-05-14T00:00:07.438946521Z" level=info msg="connecting to shim 3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995" address="unix:///run/containerd/s/87af34268d6f609cb1ce8b0c12a1425a67390a5e713f0115b95588c72fa9fcac" protocol=ttrpc version=3 May 14 00:00:07.463817 systemd[1]: Started cri-containerd-3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995.scope - libcontainer container 3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995. 
May 14 00:00:07.581864 systemd-networkd[1413]: calif3902afc877: Gained IPv6LL May 14 00:00:07.670037 containerd[1491]: time="2025-05-14T00:00:07.669830135Z" level=info msg="StartContainer for \"3480cfd7b3e72e607a27c439816730bfe29cc1f43b4bf84e5578b231bbce4995\" returns successfully" May 14 00:00:07.670209 containerd[1491]: time="2025-05-14T00:00:07.669991861Z" level=info msg="StartContainer for \"b0aa5e7cb57e0a38006cf200c1efb8fa3c9d86b765f43132a3afcbf4bd9d8546\" returns successfully" May 14 00:00:07.670818 containerd[1491]: time="2025-05-14T00:00:07.670745108Z" level=info msg="StartContainer for \"ece5baba27de9bc4e820fff94bd426f8390b50c6442348626e34d8780bfcad21\" returns successfully" May 14 00:00:07.784735 kubelet[2658]: E0514 00:00:07.784526 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:07.789763 kubelet[2658]: E0514 00:00:07.789742 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:08.098699 kubelet[2658]: I0514 00:00:08.098513 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-l9mww" podStartSLOduration=95.098484854 podStartE2EDuration="1m35.098484854s" podCreationTimestamp="2025-05-13 23:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:08.097262092 +0000 UTC m=+105.213172219" watchObservedRunningTime="2025-05-14 00:00:08.098484854 +0000 UTC m=+105.214394981" May 14 00:00:08.142171 kubelet[2658]: I0514 00:00:08.142085 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rnl27" podStartSLOduration=95.142060344 podStartE2EDuration="1m35.142060344s" podCreationTimestamp="2025-05-13 23:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:08.140154552 +0000 UTC m=+105.256064680" watchObservedRunningTime="2025-05-14 00:00:08.142060344 +0000 UTC m=+105.257970471" May 14 00:00:08.158418 kubelet[2658]: I0514 00:00:08.158228 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c6bb84fcc-8lbpv" podStartSLOduration=88.158208551 podStartE2EDuration="1m28.158208551s" podCreationTimestamp="2025-05-13 23:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:08.158067076 +0000 UTC m=+105.273977203" watchObservedRunningTime="2025-05-14 00:00:08.158208551 +0000 UTC m=+105.274118678" May 14 00:00:08.158774 systemd-networkd[1413]: cali0fc551f20da: Gained IPv6LL May 14 00:00:08.502321 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:41994.service - OpenSSH per-connection server daemon (10.0.0.1:41994). May 14 00:00:08.570752 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 41994 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:08.573260 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:08.581242 systemd-logind[1475]: New session 22 of user core. May 14 00:00:08.586897 systemd[1]: Started session-22.scope - Session 22 of User core. 
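The recurring kubelet dns.go:153 warnings above stem from glibc's resolver honoring at most three nameserver entries (MAXNS), so kubelet applies only the first three — here 1.1.1.1, 1.0.0.1, 8.8.8.8 — and warns that the rest were omitted. A sketch of that truncation; the fourth nameserver in the sample input is made up, since the log does not name the dropped entries.

// glibc's resolver honors at most three "nameserver" lines (MAXNS),
// so kubelet applies the first three and warns about the rest.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNS = 3

func appliedNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) >= 2 && f[0] == "nameserver" {
			ns = append(ns, f[1])
		}
	}
	if len(ns) > maxNS {
		ns = ns[:maxNS]
	}
	return ns
}

func main() {
	// 9.9.9.9 is a hypothetical fourth entry for illustration
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(appliedNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}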
May 14 00:00:08.724000 sshd[6030]: Connection closed by 10.0.0.1 port 41994 May 14 00:00:08.724327 sshd-session[6028]: pam_unix(sshd:session): session closed for user core May 14 00:00:08.730266 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:41994.service: Deactivated successfully. May 14 00:00:08.732448 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:00:08.733305 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. May 14 00:00:08.734307 systemd-logind[1475]: Removed session 22. May 14 00:00:08.796655 kubelet[2658]: E0514 00:00:08.795719 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:08.796655 kubelet[2658]: E0514 00:00:08.795767 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:09.797944 kubelet[2658]: E0514 00:00:09.797368 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:09.797944 kubelet[2658]: E0514 00:00:09.797645 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:10.799531 kubelet[2658]: E0514 00:00:10.799283 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:10.799531 kubelet[2658]: E0514 00:00:10.799443 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:13.551905 containerd[1491]: time="2025-05-14T00:00:13.551773984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\" id:\"4843bcfe6fb5ffdc3758bc4b409ce67763f098316ffc08e2f61d3c56ea4af4e4\" pid:6060 exited_at:{seconds:1747180813 nanos:551348364}" May 14 00:00:13.554155 kubelet[2658]: E0514 00:00:13.554126 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:13.738350 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:42002.service - OpenSSH per-connection server daemon (10.0.0.1:42002). May 14 00:00:13.800573 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 42002 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:13.802536 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:13.807253 systemd-logind[1475]: New session 23 of user core. May 14 00:00:13.816812 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:00:13.949088 sshd[6079]: Connection closed by 10.0.0.1 port 42002 May 14 00:00:13.949494 sshd-session[6077]: pam_unix(sshd:session): session closed for user core May 14 00:00:13.954645 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:42002.service: Deactivated successfully. May 14 00:00:13.957914 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:00:13.958899 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. 
May 14 00:00:13.960090 systemd-logind[1475]: Removed session 23. May 14 00:00:18.969932 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:38610.service - OpenSSH per-connection server daemon (10.0.0.1:38610). May 14 00:00:19.048144 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 38610 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:19.050393 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:19.059384 systemd-logind[1475]: New session 24 of user core. May 14 00:00:19.066832 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 00:00:19.130537 containerd[1491]: time="2025-05-14T00:00:19.130469718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:19.133825 containerd[1491]: time="2025-05-14T00:00:19.133772392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 00:00:19.135888 containerd[1491]: time="2025-05-14T00:00:19.135828214Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:19.143325 containerd[1491]: time="2025-05-14T00:00:19.143250234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:19.144788 containerd[1491]: time="2025-05-14T00:00:19.144749981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 13.598611498s" May 14 00:00:19.144973 containerd[1491]: time="2025-05-14T00:00:19.144866167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 00:00:19.150766 containerd[1491]: time="2025-05-14T00:00:19.149547301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:00:19.150766 containerd[1491]: time="2025-05-14T00:00:19.150567161Z" level=info msg="CreateContainer within sandbox \"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 00:00:19.181923 containerd[1491]: time="2025-05-14T00:00:19.181863644Z" level=info msg="Container fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:19.198901 containerd[1491]: time="2025-05-14T00:00:19.198680997Z" level=info msg="CreateContainer within sandbox \"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1\"" May 14 00:00:19.199604 containerd[1491]: time="2025-05-14T00:00:19.199538961Z" level=info msg="StartContainer for \"fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1\"" May 14 00:00:19.201152 containerd[1491]: time="2025-05-14T00:00:19.200935466Z" level=info msg="connecting to shim 
fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1" address="unix:///run/containerd/s/07aa313c0021fc627fbba74e4616ff7994dce52ae6d1d175af6831a5b6526e11" protocol=ttrpc version=3 May 14 00:00:19.219588 sshd[6106]: Connection closed by 10.0.0.1 port 38610 May 14 00:00:19.221609 sshd-session[6100]: pam_unix(sshd:session): session closed for user core May 14 00:00:19.233020 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:38610.service: Deactivated successfully. May 14 00:00:19.235777 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:00:19.236927 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. May 14 00:00:19.240548 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:38612.service - OpenSSH per-connection server daemon (10.0.0.1:38612). May 14 00:00:19.241976 systemd-logind[1475]: Removed session 24. May 14 00:00:19.258839 systemd[1]: Started cri-containerd-fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1.scope - libcontainer container fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1. May 14 00:00:19.300892 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 38612 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:19.303081 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:19.309541 systemd-logind[1475]: New session 25 of user core. May 14 00:00:19.314098 containerd[1491]: time="2025-05-14T00:00:19.314039095Z" level=info msg="StartContainer for \"fca044f19beb3633d742e220fbe501964d85e795f8559292ea57982ae1af82b1\" returns successfully" May 14 00:00:19.317871 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 00:00:19.791775 sshd[6150]: Connection closed by 10.0.0.1 port 38612 May 14 00:00:19.798113 sshd-session[6127]: pam_unix(sshd:session): session closed for user core May 14 00:00:19.813348 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:38626.service - OpenSSH per-connection server daemon (10.0.0.1:38626). May 14 00:00:19.813922 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:38612.service: Deactivated successfully. May 14 00:00:19.816652 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:00:19.822647 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit. May 14 00:00:19.824044 systemd-logind[1475]: Removed session 25. May 14 00:00:19.874828 sshd[6158]: Accepted publickey for core from 10.0.0.1 port 38626 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:19.876514 sshd-session[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:19.881080 systemd-logind[1475]: New session 26 of user core. May 14 00:00:19.886814 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:00:21.696441 sshd[6163]: Connection closed by 10.0.0.1 port 38626 May 14 00:00:21.696969 sshd-session[6158]: pam_unix(sshd:session): session closed for user core May 14 00:00:21.711956 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:38626.service: Deactivated successfully. May 14 00:00:21.714793 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:00:21.715215 systemd[1]: session-26.scope: Consumed 671ms CPU time, 73.8M memory peak. May 14 00:00:21.718240 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit. May 14 00:00:21.722356 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:38638.service - OpenSSH per-connection server daemon (10.0.0.1:38638). 
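For scale, the calico/csi:v3.29.3 pull above moved bytes read=7912898 in 13.598611498s, roughly 0.55 MiB/s of registry traffic; the separately reported size "9405520" is the image's total content size, not necessarily what was fetched. The arithmetic:

// Back-of-envelope transfer rate for the calico/csi:v3.29.3 pull logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 7912898.0                   // bytes fetched, per the log
	dur, _ := time.ParseDuration("13.598611498s") // pull wall time, per the log
	fmt.Printf("~%.2f MiB/s\n", bytesRead/dur.Seconds()/(1<<20))
}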
May 14 00:00:21.725102 systemd-logind[1475]: Removed session 26. May 14 00:00:21.774829 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 38638 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:21.776975 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:21.782962 systemd-logind[1475]: New session 27 of user core. May 14 00:00:21.790913 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 00:00:22.189996 sshd[6205]: Connection closed by 10.0.0.1 port 38638 May 14 00:00:22.190728 sshd-session[6202]: pam_unix(sshd:session): session closed for user core May 14 00:00:22.200659 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:38638.service: Deactivated successfully. May 14 00:00:22.203600 systemd[1]: session-27.scope: Deactivated successfully. May 14 00:00:22.206103 systemd-logind[1475]: Session 27 logged out. Waiting for processes to exit. May 14 00:00:22.207991 systemd[1]: Started sshd@27-10.0.0.80:22-10.0.0.1:38642.service - OpenSSH per-connection server daemon (10.0.0.1:38642). May 14 00:00:22.209305 systemd-logind[1475]: Removed session 27. May 14 00:00:22.262011 sshd[6216]: Accepted publickey for core from 10.0.0.1 port 38642 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:22.263966 sshd-session[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:22.269104 systemd-logind[1475]: New session 28 of user core. May 14 00:00:22.276802 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 00:00:22.611069 sshd[6219]: Connection closed by 10.0.0.1 port 38642 May 14 00:00:22.610859 sshd-session[6216]: pam_unix(sshd:session): session closed for user core May 14 00:00:22.615694 systemd[1]: sshd@27-10.0.0.80:22-10.0.0.1:38642.service: Deactivated successfully. May 14 00:00:22.617883 systemd[1]: session-28.scope: Deactivated successfully. May 14 00:00:22.618717 systemd-logind[1475]: Session 28 logged out. Waiting for processes to exit. May 14 00:00:22.619652 systemd-logind[1475]: Removed session 28. 
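The sshd/systemd-logind entries above form strict open/close pairs: Accepted publickey, New session N of user core, session closed, Removed session N. A grep-style sketch that pairs them up from journal text; the input is a hard-coded excerpt, where a real tool would stream journalctl output.

// Pair logind's "New session N" / "Removed session N" messages; the
// sample lines are abridged copies of entries above.
package main

import (
	"fmt"
	"regexp"
)

var (
	reNew     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	reRemoved = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	lines := []string{
		"systemd-logind[1475]: New session 27 of user core.",
		"systemd-logind[1475]: Removed session 27.",
		"systemd-logind[1475]: New session 28 of user core.",
		"systemd-logind[1475]: Removed session 28.",
	}
	open := map[string]string{} // session id -> user
	for _, l := range lines {
		if m := reNew.FindStringSubmatch(l); m != nil {
			open[m[1]] = m[2]
		} else if m := reRemoved.FindStringSubmatch(l); m != nil {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
}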
May 14 00:00:23.008874 containerd[1491]: time="2025-05-14T00:00:23.008499680Z" level=info msg="StopPodSandbox for \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\"" May 14 00:00:23.008874 containerd[1491]: time="2025-05-14T00:00:23.008688799Z" level=info msg="TearDown network for sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" successfully" May 14 00:00:23.008874 containerd[1491]: time="2025-05-14T00:00:23.008706263Z" level=info msg="StopPodSandbox for \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" returns successfully" May 14 00:00:23.009500 containerd[1491]: time="2025-05-14T00:00:23.009268931Z" level=info msg="RemovePodSandbox for \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\"" May 14 00:00:23.013918 containerd[1491]: time="2025-05-14T00:00:23.013847950Z" level=info msg="Forcibly stopping sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\"" May 14 00:00:23.014154 containerd[1491]: time="2025-05-14T00:00:23.014088360Z" level=info msg="TearDown network for sandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" successfully" May 14 00:00:23.162087 containerd[1491]: time="2025-05-14T00:00:23.162028857Z" level=info msg="Ensure that sandbox 670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed in task-service has been cleanup successfully" May 14 00:00:23.789824 containerd[1491]: time="2025-05-14T00:00:23.789760656Z" level=info msg="RemovePodSandbox \"670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed\" returns successfully" May 14 00:00:27.616017 systemd[1]: Started sshd@28-10.0.0.80:22-10.0.0.1:38650.service - OpenSSH per-connection server daemon (10.0.0.1:38650). May 14 00:00:27.681132 sshd[6242]: Accepted publickey for core from 10.0.0.1 port 38650 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:27.682920 sshd-session[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:27.691464 systemd-logind[1475]: New session 29 of user core. May 14 00:00:27.695824 systemd[1]: Started session-29.scope - Session 29 of User core. 
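The teardown above follows the CRI ordering: StopPodSandbox (including the forcible retry) must succeed, and be safe to repeat, before RemovePodSandbox deletes the sandbox. Below is a toy in-memory model of that contract; the types are hypothetical, and the real calls are CRI RPCs served by containerd.

// Model of the stop-then-remove ordering visible above.
package main

import "fmt"

type sandbox struct {
	id      string
	stopped bool
}

type fakeRuntime struct {
	boxes map[string]*sandbox
}

// Stop is idempotent: stopping an already-stopped sandbox is not an error,
// which is why the "Forcibly stopping" retry above is safe.
func (r *fakeRuntime) Stop(id string) {
	if b, ok := r.boxes[id]; ok {
		b.stopped = true // network TearDown would happen here
	}
}

// Remove succeeds only once the sandbox is stopped, and is itself
// idempotent for already-removed ids.
func (r *fakeRuntime) Remove(id string) error {
	b, ok := r.boxes[id]
	if !ok {
		return nil
	}
	if !b.stopped {
		return fmt.Errorf("sandbox %s still running", id)
	}
	delete(r.boxes, id)
	return nil
}

func main() {
	id := "670dd8e0671d36aba52d3bcd1a39f51ecd0a1688fb34eb64a8b5ef2ca857e8ed"
	r := &fakeRuntime{boxes: map[string]*sandbox{id: {id: id}}}
	r.Stop(id)
	r.Stop(id)                // the forcible second stop is a no-op here
	fmt.Println(r.Remove(id)) // <nil>
}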
May 14 00:00:27.882048 containerd[1491]: time="2025-05-14T00:00:27.881848145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:27.887837 containerd[1491]: time="2025-05-14T00:00:27.887761163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 00:00:27.890191 containerd[1491]: time="2025-05-14T00:00:27.890160559Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:27.895488 containerd[1491]: time="2025-05-14T00:00:27.895391406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:27.896365 containerd[1491]: time="2025-05-14T00:00:27.896331873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 8.746744113s" May 14 00:00:27.896425 containerd[1491]: time="2025-05-14T00:00:27.896371961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 00:00:27.898149 containerd[1491]: time="2025-05-14T00:00:27.898066190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 00:00:27.913024 containerd[1491]: time="2025-05-14T00:00:27.912937366Z" level=info msg="CreateContainer within sandbox \"43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:00:27.991289 containerd[1491]: time="2025-05-14T00:00:27.989900223Z" level=info msg="Container 4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:28.004785 containerd[1491]: time="2025-05-14T00:00:28.004729607Z" level=info msg="CreateContainer within sandbox \"43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86\"" May 14 00:00:28.006864 containerd[1491]: time="2025-05-14T00:00:28.005474813Z" level=info msg="StartContainer for \"4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86\"" May 14 00:00:28.007147 containerd[1491]: time="2025-05-14T00:00:28.007086130Z" level=info msg="connecting to shim 4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86" address="unix:///run/containerd/s/37a917a9de45511b310ec9ae9869d28db9b09d6693ed526a3a8cd12b4690610a" protocol=ttrpc version=3 May 14 00:00:28.010197 sshd[6244]: Connection closed by 10.0.0.1 port 38650 May 14 00:00:28.012335 sshd-session[6242]: pam_unix(sshd:session): session closed for user core May 14 00:00:28.019548 systemd[1]: sshd@28-10.0.0.80:22-10.0.0.1:38650.service: Deactivated successfully. 
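Note that container 4fb5ac80... above connects to the same shim socket (/run/containerd/s/37a917a9...) that served its sandbox 43f281c4... earlier in this log: containers reuse their pod sandbox's shim. A small sketch recovering that grouping from "connecting to shim" lines; the two sample lines are abridged copies of entries above.

// Group "connecting to shim" log lines by shim socket to recover the
// sandbox <-> container mapping.
package main

import (
	"fmt"
	"regexp"
)

var re = regexp.MustCompile(`connecting to shim (\S+)" address="(unix://[^"]+)"`)

func main() {
	lines := []string{
		`msg="connecting to shim 43f281c4424e5b448dfd6587149f2cb685591b4a4975e6836756550ca65d14fe" address="unix:///run/containerd/s/37a917a9de45511b310ec9ae9869d28db9b09d6693ed526a3a8cd12b4690610a"`,
		`msg="connecting to shim 4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86" address="unix:///run/containerd/s/37a917a9de45511b310ec9ae9869d28db9b09d6693ed526a3a8cd12b4690610a"`,
	}
	bySocket := map[string][]string{}
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			bySocket[m[2]] = append(bySocket[m[2]], m[1])
		}
	}
	for sock, ids := range bySocket {
		fmt.Println(sock, "->", ids) // one socket, two container ids
	}
}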
May 14 00:00:28.022502 systemd[1]: session-29.scope: Deactivated successfully. May 14 00:00:28.025344 systemd-logind[1475]: Session 29 logged out. Waiting for processes to exit. May 14 00:00:28.036050 systemd[1]: Started cri-containerd-4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86.scope - libcontainer container 4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86. May 14 00:00:28.036861 systemd-logind[1475]: Removed session 29. May 14 00:00:28.100180 containerd[1491]: time="2025-05-14T00:00:28.100105258Z" level=info msg="StartContainer for \"4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86\" returns successfully" May 14 00:00:28.885173 containerd[1491]: time="2025-05-14T00:00:28.885129210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb5ac8075cabf86126fbb3288cc570ff6967b6cf01503a13cb47abb4cb6ec86\" id:\"a24b15fd094b492a09439d6e0d7af3f3ef33e0d25591d41bace4f54021266023\" pid:6303 exited_at:{seconds:1747180828 nanos:884701605}" May 14 00:00:28.924927 kubelet[2658]: I0514 00:00:28.924828 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-688d9b6545-z68xp" podStartSLOduration=86.688023254 podStartE2EDuration="1m47.924798547s" podCreationTimestamp="2025-05-13 23:58:41 +0000 UTC" firstStartedPulling="2025-05-14 00:00:06.660642871 +0000 UTC m=+103.776552998" lastFinishedPulling="2025-05-14 00:00:27.897418164 +0000 UTC m=+125.013328291" observedRunningTime="2025-05-14 00:00:28.890096502 +0000 UTC m=+126.006006629" watchObservedRunningTime="2025-05-14 00:00:28.924798547 +0000 UTC m=+126.040708674" May 14 00:00:33.034344 systemd[1]: Started sshd@29-10.0.0.80:22-10.0.0.1:55382.service - OpenSSH per-connection server daemon (10.0.0.1:55382). May 14 00:00:33.151365 sshd[6317]: Accepted publickey for core from 10.0.0.1 port 55382 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:33.153642 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:33.160325 systemd-logind[1475]: New session 30 of user core. May 14 00:00:33.173983 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 00:00:33.387656 sshd[6320]: Connection closed by 10.0.0.1 port 55382 May 14 00:00:33.388162 sshd-session[6317]: pam_unix(sshd:session): session closed for user core May 14 00:00:33.393213 systemd[1]: sshd@29-10.0.0.80:22-10.0.0.1:55382.service: Deactivated successfully. May 14 00:00:33.396079 systemd[1]: session-30.scope: Deactivated successfully. May 14 00:00:33.397298 systemd-logind[1475]: Session 30 logged out. Waiting for processes to exit. May 14 00:00:33.398638 systemd-logind[1475]: Removed session 30. 
May 14 00:00:35.004522 kubelet[2658]: E0514 00:00:35.004445 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:36.954245 containerd[1491]: time="2025-05-14T00:00:36.953941387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:36.990586 containerd[1491]: time="2025-05-14T00:00:36.990434024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 00:00:37.043840 containerd[1491]: time="2025-05-14T00:00:37.043733412Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:37.116833 containerd[1491]: time="2025-05-14T00:00:37.116641526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:37.117874 containerd[1491]: time="2025-05-14T00:00:37.117776364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 9.219645448s" May 14 00:00:37.117874 containerd[1491]: time="2025-05-14T00:00:37.117840761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 00:00:37.121544 containerd[1491]: time="2025-05-14T00:00:37.121486708Z" level=info msg="CreateContainer within sandbox \"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 00:00:37.205982 containerd[1491]: time="2025-05-14T00:00:37.205028842Z" level=info msg="Container e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:37.357300 containerd[1491]: time="2025-05-14T00:00:37.355333466Z" level=info msg="CreateContainer within sandbox \"89a0aad24cbc0a0614a78ab1b7ec6051998ec0a5853cbae4a8579533109a1d0c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d\"" May 14 00:00:37.361443 containerd[1491]: time="2025-05-14T00:00:37.358712612Z" level=info msg="StartContainer for \"e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d\"" May 14 00:00:37.367101 containerd[1491]: time="2025-05-14T00:00:37.365698409Z" level=info msg="connecting to shim e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d" address="unix:///run/containerd/s/07aa313c0021fc627fbba74e4616ff7994dce52ae6d1d175af6831a5b6526e11" protocol=ttrpc version=3 May 14 00:00:37.448511 systemd[1]: Started cri-containerd-e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d.scope - libcontainer container e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d. 
May 14 00:00:37.677203 containerd[1491]: time="2025-05-14T00:00:37.677114443Z" level=info msg="StartContainer for \"e253360d9d1ad85ca97737db5896030855a0a64e13e39d63b48d6802a7764a1d\" returns successfully" May 14 00:00:37.784971 kubelet[2658]: I0514 00:00:37.784784 2658 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 00:00:37.784971 kubelet[2658]: I0514 00:00:37.784985 2658 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 00:00:38.400210 systemd[1]: Started sshd@30-10.0.0.80:22-10.0.0.1:60216.service - OpenSSH per-connection server daemon (10.0.0.1:60216). May 14 00:00:38.455397 sshd[6372]: Accepted publickey for core from 10.0.0.1 port 60216 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:38.457361 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:38.464919 systemd-logind[1475]: New session 31 of user core. May 14 00:00:38.474849 systemd[1]: Started session-31.scope - Session 31 of User core. May 14 00:00:38.656434 sshd[6380]: Connection closed by 10.0.0.1 port 60216 May 14 00:00:38.656735 sshd-session[6372]: pam_unix(sshd:session): session closed for user core May 14 00:00:38.661297 systemd[1]: sshd@30-10.0.0.80:22-10.0.0.1:60216.service: Deactivated successfully. May 14 00:00:38.663774 systemd[1]: session-31.scope: Deactivated successfully. May 14 00:00:38.664472 systemd-logind[1475]: Session 31 logged out. Waiting for processes to exit. May 14 00:00:38.665578 systemd-logind[1475]: Removed session 31. May 14 00:00:43.571109 containerd[1491]: time="2025-05-14T00:00:43.570997042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a47753f3c5b10215746e98a63101dbc1014c4ac3d9a78e36e869bd2afb07fa9\" id:\"5b7b7833bf8a9a9ccddea12295595b7902b9df4dd6ffbd91542456798732dc79\" pid:6405 exited_at:{seconds:1747180843 nanos:569769061}" May 14 00:00:43.671738 systemd[1]: Started sshd@31-10.0.0.80:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). May 14 00:00:43.735511 sshd[6418]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:43.738977 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:43.745822 systemd-logind[1475]: New session 32 of user core. May 14 00:00:43.752963 systemd[1]: Started session-32.scope - Session 32 of User core. May 14 00:00:43.924574 sshd[6420]: Connection closed by 10.0.0.1 port 60218 May 14 00:00:43.925102 sshd-session[6418]: pam_unix(sshd:session): session closed for user core May 14 00:00:43.931257 systemd[1]: sshd@31-10.0.0.80:22-10.0.0.1:60218.service: Deactivated successfully. May 14 00:00:43.934353 systemd[1]: session-32.scope: Deactivated successfully. May 14 00:00:43.935392 systemd-logind[1475]: Session 32 logged out. Waiting for processes to exit. May 14 00:00:43.936627 systemd-logind[1475]: Removed session 32. May 14 00:00:48.941632 systemd[1]: Started sshd@32-10.0.0.80:22-10.0.0.1:57892.service - OpenSSH per-connection server daemon (10.0.0.1:57892). 
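The kubelet csi_plugin.go lines above record driver csi.tigera.io registering through /var/lib/kubelet/plugins/csi.tigera.io/csi.sock; registration proper is a gRPC GetInfo/NotifyRegistrationStatus exchange over kubelet's plugin watcher. This sketch only verifies that such a path exists and is a unix socket on the host.

// Check that the CSI driver socket named in the kubelet log is present.
package main

import (
	"fmt"
	"os"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	fi, err := os.Stat(sock)
	if err != nil {
		fmt.Println("not present on this machine:", err)
		return
	}
	if fi.Mode()&os.ModeSocket != 0 {
		fmt.Println("unix socket present:", sock)
	}
}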
May 14 00:00:48.998115 sshd[6436]: Accepted publickey for core from 10.0.0.1 port 57892 ssh2: RSA SHA256:7f2XacyFcvGxEsM5obZzQpmkhMs9Q6mfAUEaqBEC3Xw May 14 00:00:48.999614 sshd-session[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:49.006808 systemd-logind[1475]: New session 33 of user core. May 14 00:00:49.014223 systemd[1]: Started session-33.scope - Session 33 of User core. May 14 00:00:49.149606 sshd[6438]: Connection closed by 10.0.0.1 port 57892 May 14 00:00:49.150047 sshd-session[6436]: pam_unix(sshd:session): session closed for user core May 14 00:00:49.159946 systemd[1]: sshd@32-10.0.0.80:22-10.0.0.1:57892.service: Deactivated successfully. May 14 00:00:49.162364 systemd[1]: session-33.scope: Deactivated successfully. May 14 00:00:49.163292 systemd-logind[1475]: Session 33 logged out. Waiting for processes to exit. May 14 00:00:49.164453 systemd-logind[1475]: Removed session 33.