May 27 17:45:37.833328 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:45:37.833362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:37.833378 kernel: BIOS-provided physical RAM map:
May 27 17:45:37.833389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 17:45:37.833400 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 17:45:37.833411 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 17:45:37.833425 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 27 17:45:37.833439 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 27 17:45:37.833450 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 17:45:37.833461 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 17:45:37.833472 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 17:45:37.833483 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 17:45:37.833494 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 17:45:37.833503 kernel: NX (Execute Disable) protection: active
May 27 17:45:37.833517 kernel: APIC: Static calls initialized
May 27 17:45:37.833527 kernel: SMBIOS 2.8 present.
May 27 17:45:37.833537 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 27 17:45:37.836809 kernel: DMI: Memory slots populated: 1/1
May 27 17:45:37.836817 kernel: Hypervisor detected: KVM
May 27 17:45:37.836825 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 17:45:37.836832 kernel: kvm-clock: using sched offset of 3303165514 cycles
May 27 17:45:37.836839 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 17:45:37.836847 kernel: tsc: Detected 2794.748 MHz processor
May 27 17:45:37.836862 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:45:37.836873 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:45:37.836880 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 27 17:45:37.836888 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 17:45:37.836895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:45:37.836903 kernel: Using GB pages for direct mapping
May 27 17:45:37.836910 kernel: ACPI: Early table checksum verification disabled
May 27 17:45:37.836917 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 27 17:45:37.836925 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836934 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836941 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836948 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 27 17:45:37.836955 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836963 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836970 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836977 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:45:37.836984 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
May 27 17:45:37.836996 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
May 27 17:45:37.837004 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 27 17:45:37.837011 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
May 27 17:45:37.837018 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
May 27 17:45:37.837026 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
May 27 17:45:37.837034 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
May 27 17:45:37.837043 kernel: No NUMA configuration found
May 27 17:45:37.837051 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 27 17:45:37.837058 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 27 17:45:37.837065 kernel: Zone ranges:
May 27 17:45:37.837073 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:45:37.837081 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
May 27 17:45:37.837088 kernel:   Normal   empty
May 27 17:45:37.837096 kernel:   Device   empty
May 27 17:45:37.837103 kernel: Movable zone start for each node
May 27 17:45:37.837110 kernel: Early memory node ranges
May 27 17:45:37.837120 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
May 27 17:45:37.837127 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 27 17:45:37.837134 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 27 17:45:37.837142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:45:37.837149 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 17:45:37.837157 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 27 17:45:37.837164 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 17:45:37.837171 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 17:45:37.837179 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:45:37.837188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 17:45:37.837196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 17:45:37.837203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:45:37.837211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 17:45:37.837219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 17:45:37.837226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:45:37.837233 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 17:45:37.837241 kernel: TSC deadline timer available
May 27 17:45:37.837248 kernel: CPU topo: Max. logical packages: 1
May 27 17:45:37.837257 kernel: CPU topo: Max. logical dies: 1
May 27 17:45:37.837265 kernel: CPU topo: Max. dies per package: 1
May 27 17:45:37.837272 kernel: CPU topo: Max. threads per core: 1
May 27 17:45:37.837279 kernel: CPU topo: Num. cores per package: 4
May 27 17:45:37.837287 kernel: CPU topo: Num. threads per package: 4
May 27 17:45:37.837294 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 17:45:37.837301 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 17:45:37.837309 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 17:45:37.837316 kernel: kvm-guest: setup PV sched yield
May 27 17:45:37.837324 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 17:45:37.837333 kernel: Booting paravirtualized kernel on KVM
May 27 17:45:37.837341 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:45:37.837349 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 17:45:37.837356 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 17:45:37.837363 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 17:45:37.837371 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 17:45:37.837378 kernel: kvm-guest: PV spinlocks enabled
May 27 17:45:37.837385 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:45:37.837394 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:37.837404 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:45:37.837412 kernel: random: crng init done
May 27 17:45:37.837419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:45:37.837427 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:45:37.837434 kernel: Fallback order for Node 0: 0
May 27 17:45:37.837441 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 27 17:45:37.837449 kernel: Policy zone: DMA32
May 27 17:45:37.837456 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:45:37.837466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:45:37.837474 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:45:37.837481 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:45:37.837488 kernel: Dynamic Preempt: voluntary
May 27 17:45:37.837496 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:45:37.837507 kernel: rcu: RCU event tracing is enabled.
May 27 17:45:37.837514 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:45:37.837522 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:45:37.837529 kernel: Rude variant of Tasks RCU enabled.
May 27 17:45:37.837537 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:45:37.837547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:45:37.837554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:45:37.837562 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:45:37.837569 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:45:37.837577 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:45:37.837584 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 17:45:37.837592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:45:37.837608 kernel: Console: colour VGA+ 80x25
May 27 17:45:37.837615 kernel: printk: legacy console [ttyS0] enabled
May 27 17:45:37.837623 kernel: ACPI: Core revision 20240827
May 27 17:45:37.837631 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 17:45:37.837640 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:45:37.837648 kernel: x2apic enabled
May 27 17:45:37.837656 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:45:37.837663 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 17:45:37.837672 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 17:45:37.837681 kernel: kvm-guest: setup PV IPIs
May 27 17:45:37.837689 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 17:45:37.837697 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:45:37.837705 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 17:45:37.837713 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:45:37.837720 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 17:45:37.837728 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 17:45:37.837736 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:45:37.837744 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:45:37.837754 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:45:37.837761 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 17:45:37.837769 kernel: RETBleed: Mitigation: untrained return thunk
May 27 17:45:37.837793 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 17:45:37.837801 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 17:45:37.837809 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 17:45:37.837817 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 17:45:37.837825 kernel: x86/bugs: return thunk changed
May 27 17:45:37.837836 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 17:45:37.837844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:45:37.837851 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:45:37.837865 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:45:37.837873 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:45:37.837881 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 17:45:37.837888 kernel: Freeing SMP alternatives memory: 32K
May 27 17:45:37.837897 kernel: pid_max: default: 32768 minimum: 301
May 27 17:45:37.837905 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:45:37.837914 kernel: landlock: Up and running.
May 27 17:45:37.837922 kernel: SELinux: Initializing.
May 27 17:45:37.837930 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:45:37.837938 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:45:37.837946 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 17:45:37.837953 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 17:45:37.837961 kernel: ... version:                0
May 27 17:45:37.837969 kernel: ... bit width:              48
May 27 17:45:37.837976 kernel: ... generic registers:      6
May 27 17:45:37.837986 kernel: ... value mask:             0000ffffffffffff
May 27 17:45:37.837994 kernel: ... max period:             00007fffffffffff
May 27 17:45:37.838002 kernel: ... fixed-purpose events:   0
May 27 17:45:37.838009 kernel: ... event mask:             000000000000003f
May 27 17:45:37.838017 kernel: signal: max sigframe size: 1776
May 27 17:45:37.838025 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:45:37.838033 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:45:37.838041 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:45:37.838049 kernel: smp: Bringing up secondary CPUs ...
May 27 17:45:37.838058 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:45:37.838066 kernel: .... node #0, CPUs: #1 #2 #3
May 27 17:45:37.838074 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:45:37.838081 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 17:45:37.838090 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 136904K reserved, 0K cma-reserved)
May 27 17:45:37.838098 kernel: devtmpfs: initialized
May 27 17:45:37.838105 kernel: x86/mm: Memory block size: 128MB
May 27 17:45:37.838113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:45:37.838121 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:45:37.838131 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:45:37.838139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:45:37.838147 kernel: audit: initializing netlink subsys (disabled)
May 27 17:45:37.838155 kernel: audit: type=2000 audit(1748367934.791:1): state=initialized audit_enabled=0 res=1
May 27 17:45:37.838162 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:45:37.838170 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:45:37.838178 kernel: cpuidle: using governor menu
May 27 17:45:37.838185 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:45:37.838193 kernel: dca service started, version 1.12.1
May 27 17:45:37.838203 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 17:45:37.838211 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 17:45:37.838219 kernel: PCI: Using configuration type 1 for base access
May 27 17:45:37.838227 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:45:37.838235 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:45:37.838242 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:45:37.838250 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:45:37.838258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:45:37.838265 kernel: ACPI: Added _OSI(Module Device)
May 27 17:45:37.838275 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:45:37.838283 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:45:37.838291 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:45:37.838298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:45:37.838306 kernel: ACPI: Interpreter enabled
May 27 17:45:37.838314 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 17:45:37.838321 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:45:37.838329 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:45:37.838337 kernel: PCI: Using E820 reservations for host bridge windows
May 27 17:45:37.838346 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 17:45:37.838354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:45:37.838531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:45:37.838652 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 17:45:37.838768 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 17:45:37.838803 kernel: PCI host bridge to bus 0000:00
May 27 17:45:37.838935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 17:45:37.839048 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 17:45:37.839157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 17:45:37.839262 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 27 17:45:37.839368 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 17:45:37.839473 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 27 17:45:37.839577 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:45:37.839707 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 17:45:37.839852 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 17:45:37.839984 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 17:45:37.840102 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 17:45:37.840216 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 17:45:37.840330 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 17:45:37.840461 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:45:37.840589 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 27 17:45:37.840706 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 17:45:37.840919 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 17:45:37.841047 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 17:45:37.841172 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 27 17:45:37.841309 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 17:45:37.841449 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 17:45:37.841610 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 17:45:37.841788 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 27 17:45:37.841979 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 27 17:45:37.842131 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 27 17:45:37.842258 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 17:45:37.842382 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 17:45:37.842518 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 17:45:37.842673 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 17:45:37.842807 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 27 17:45:37.842935 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 27 17:45:37.843060 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 17:45:37.843176 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 17:45:37.843187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 17:45:37.843195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 17:45:37.843207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 17:45:37.843215 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 17:45:37.843223 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 17:45:37.843231 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 17:45:37.843239 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 17:45:37.843247 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 17:45:37.843254 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 17:45:37.843262 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 17:45:37.843272 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 17:45:37.843280 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 17:45:37.843288 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 17:45:37.843296 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 17:45:37.843303 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 17:45:37.843311 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 17:45:37.843319 kernel: iommu: Default domain type: Translated
May 27 17:45:37.843327 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:45:37.843335 kernel: PCI: Using ACPI for IRQ routing
May 27 17:45:37.843343 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 17:45:37.843354 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 17:45:37.843362 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 27 17:45:37.843480 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 17:45:37.843595 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 17:45:37.843708 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 17:45:37.843718 kernel: vgaarb: loaded
May 27 17:45:37.843726 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 17:45:37.843734 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 17:45:37.843745 kernel: clocksource: Switched to clocksource kvm-clock
May 27 17:45:37.843753 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:45:37.843761 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:45:37.843769 kernel: pnp: PnP ACPI init
May 27 17:45:37.843936 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 17:45:37.843949 kernel: pnp: PnP ACPI: found 6 devices
May 27 17:45:37.843957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:45:37.843965 kernel: NET: Registered PF_INET protocol family
May 27 17:45:37.843976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:45:37.843984 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:45:37.843992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:45:37.844000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:45:37.844008 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:45:37.844016 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:45:37.844024 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:45:37.844032 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:45:37.844039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:45:37.844049 kernel: NET: Registered PF_XDP protocol family
May 27 17:45:37.844158 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 17:45:37.844263 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 17:45:37.844387 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 17:45:37.844527 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 27 17:45:37.844638 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 17:45:37.844742 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 27 17:45:37.844753 kernel: PCI: CLS 0 bytes, default 64
May 27 17:45:37.844765 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 17:45:37.844794 kernel: Initialise system trusted keyrings
May 27 17:45:37.844802 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:45:37.844810 kernel: Key type asymmetric registered
May 27 17:45:37.844818 kernel: Asymmetric key parser 'x509' registered
May 27 17:45:37.844826 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:45:37.844834 kernel: io scheduler mq-deadline registered
May 27 17:45:37.844842 kernel: io scheduler kyber registered
May 27 17:45:37.844850 kernel: io scheduler bfq registered
May 27 17:45:37.844869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:45:37.844877 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 17:45:37.844885 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 17:45:37.844893 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 17:45:37.844901 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:45:37.844910 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:45:37.844917 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 17:45:37.844925 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 17:45:37.844933 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 17:45:37.844943 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 17:45:37.845075 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 17:45:37.845185 kernel: rtc_cmos 00:04: registered as rtc0
May 27 17:45:37.845294 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T17:45:37 UTC (1748367937)
May 27 17:45:37.845412 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 17:45:37.845423 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 17:45:37.845431 kernel: NET: Registered PF_INET6 protocol family
May 27 17:45:37.845439 kernel: Segment Routing with IPv6
May 27 17:45:37.845450 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:45:37.845458 kernel: NET: Registered PF_PACKET protocol family
May 27 17:45:37.845466 kernel: Key type dns_resolver registered
May 27 17:45:37.845474 kernel: IPI shorthand broadcast: enabled
May 27 17:45:37.845482 kernel: sched_clock: Marking stable (2914003326, 129703388)->(3066890184, -23183470)
May 27 17:45:37.845490 kernel: registered taskstats version 1
May 27 17:45:37.845498 kernel: Loading compiled-in X.509 certificates
May 27 17:45:37.845506 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:45:37.845514 kernel: Demotion targets for Node 0: null
May 27 17:45:37.845524 kernel: Key type .fscrypt registered
May 27 17:45:37.845531 kernel: Key type fscrypt-provisioning registered
May 27 17:45:37.845539 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:45:37.845547 kernel: ima: Allocated hash algorithm: sha1
May 27 17:45:37.845557 kernel: ima: No architecture policies found
May 27 17:45:37.845567 kernel: clk: Disabling unused clocks
May 27 17:45:37.845576 kernel: Warning: unable to open an initial console.
May 27 17:45:37.845587 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:45:37.845597 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:45:37.845609 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:45:37.845619 kernel: Run /init as init process
May 27 17:45:37.845629 kernel:   with arguments:
May 27 17:45:37.845638 kernel:     /init
May 27 17:45:37.845648 kernel:   with environment:
May 27 17:45:37.845658 kernel:     HOME=/
May 27 17:45:37.845667 kernel:     TERM=linux
May 27 17:45:37.845676 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:45:37.845687 systemd[1]: Successfully made /usr/ read-only.
May 27 17:45:37.845713 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:45:37.845727 systemd[1]: Detected virtualization kvm.
May 27 17:45:37.845737 systemd[1]: Detected architecture x86-64.
May 27 17:45:37.845748 systemd[1]: Running in initrd.
May 27 17:45:37.845758 systemd[1]: No hostname configured, using default hostname.
May 27 17:45:37.845789 systemd[1]: Hostname set to .
May 27 17:45:37.845800 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:45:37.845810 systemd[1]: Queued start job for default target initrd.target.
May 27 17:45:37.845821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:37.845832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:37.845843 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:45:37.845861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:45:37.845873 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:45:37.845887 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:45:37.845900 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:45:37.845911 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:45:37.845921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:37.845932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:37.845942 systemd[1]: Reached target paths.target - Path Units.
May 27 17:45:37.845953 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:45:37.845966 systemd[1]: Reached target swap.target - Swaps.
May 27 17:45:37.845976 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:45:37.845987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:45:37.845997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:45:37.846008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:45:37.846018 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:45:37.846029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:45:37.846040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:45:37.846053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:45:37.846066 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:45:37.846076 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:45:37.846087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:45:37.846098 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:45:37.846111 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:45:37.846124 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:45:37.846135 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:45:37.846145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:45:37.846156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:37.846167 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:45:37.846181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:45:37.846192 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:45:37.846203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:45:37.846239 systemd-journald[220]: Collecting audit messages is disabled. May 27 17:45:37.846269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 27 17:45:37.846281 systemd-journald[220]: Journal started May 27 17:45:37.846305 systemd-journald[220]: Runtime Journal (/run/log/journal/9591938bc26e404a9fb78977bac02eae) is 6M, max 48.6M, 42.5M free. May 27 17:45:37.832993 systemd-modules-load[222]: Inserted module 'overlay' May 27 17:45:37.873676 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:45:37.873710 kernel: Bridge firewalling registered May 27 17:45:37.873726 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:45:37.861523 systemd-modules-load[222]: Inserted module 'br_netfilter' May 27 17:45:37.874939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:45:37.876981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:37.883294 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:45:37.886220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:45:37.901479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:45:37.902226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:45:37.912078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:45:37.913148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:45:37.917509 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:45:37.921808 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:45:37.922847 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 17:45:37.923304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 27 17:45:37.926844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:45:37.958639 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:45:37.980097 systemd-resolved[262]: Positive Trust Anchors: May 27 17:45:37.980111 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:45:37.980141 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:45:37.982678 systemd-resolved[262]: Defaulting to hostname 'linux'. May 27 17:45:37.983762 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:45:37.990809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:45:38.068825 kernel: SCSI subsystem initialized May 27 17:45:38.078812 kernel: Loading iSCSI transport class v2.0-870. 
May 27 17:45:38.088821 kernel: iscsi: registered transport (tcp) May 27 17:45:38.111808 kernel: iscsi: registered transport (qla4xxx) May 27 17:45:38.111893 kernel: QLogic iSCSI HBA Driver May 27 17:45:38.134531 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:45:38.155062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:45:38.156243 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:45:38.215995 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:45:38.218581 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 17:45:38.294820 kernel: raid6: avx2x4 gen() 16108 MB/s May 27 17:45:38.313821 kernel: raid6: avx2x2 gen() 26597 MB/s May 27 17:45:38.337820 kernel: raid6: avx2x1 gen() 25735 MB/s May 27 17:45:38.337912 kernel: raid6: using algorithm avx2x2 gen() 26597 MB/s May 27 17:45:38.354924 kernel: raid6: .... xor() 19567 MB/s, rmw enabled May 27 17:45:38.355021 kernel: raid6: using avx2x2 recovery algorithm May 27 17:45:38.375811 kernel: xor: automatically using best checksumming function avx May 27 17:45:38.547832 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:45:38.556912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:45:38.559082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:45:38.588565 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 27 17:45:38.594483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:45:38.598487 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:45:38.626213 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation May 27 17:45:38.656114 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 27 17:45:38.657611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:45:38.723352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:45:38.725061 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 17:45:38.759809 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 17:45:38.771234 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 17:45:38.778410 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 17:45:38.778428 kernel: cryptd: max_cpu_qlen set to 1000 May 27 17:45:38.778444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 17:45:38.782537 kernel: GPT:9289727 != 19775487 May 27 17:45:38.782585 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 17:45:38.782600 kernel: GPT:9289727 != 19775487 May 27 17:45:38.782613 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 17:45:38.782626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:45:38.800503 kernel: libata version 3.00 loaded. May 27 17:45:38.799695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:45:38.799846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:38.802195 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:38.804149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:45:38.808807 kernel: AES CTR mode by8 optimization enabled May 27 17:45:38.811807 kernel: ahci 0000:00:1f.2: version 3.0 May 27 17:45:38.816222 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 17:45:38.819850 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 17:45:38.824589 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 17:45:38.824772 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 17:45:38.824955 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 17:45:38.831855 kernel: scsi host0: ahci May 27 17:45:38.834826 kernel: scsi host1: ahci May 27 17:45:38.835805 kernel: scsi host2: ahci May 27 17:45:38.845799 kernel: scsi host3: ahci May 27 17:45:38.846796 kernel: scsi host4: ahci May 27 17:45:38.847250 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 17:45:38.888172 kernel: scsi host5: ahci May 27 17:45:38.888414 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 27 17:45:38.888429 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 27 17:45:38.888456 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 27 17:45:38.888468 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 27 17:45:38.888481 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 27 17:45:38.888493 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 27 17:45:38.889687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:38.909302 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 17:45:38.931652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 17:45:38.936250 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 27 17:45:38.946994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 17:45:38.950291 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 17:45:39.154554 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 17:45:39.154631 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 17:45:39.154645 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 17:45:39.155795 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 17:45:39.158247 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 17:45:39.158328 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 17:45:39.158343 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 17:45:39.159878 kernel: ata3.00: applying bridge limits May 27 17:45:39.159905 kernel: ata3.00: configured for UDMA/100 May 27 17:45:39.160831 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 17:45:39.171009 disk-uuid[633]: Primary Header is updated. May 27 17:45:39.171009 disk-uuid[633]: Secondary Entries is updated. May 27 17:45:39.171009 disk-uuid[633]: Secondary Header is updated. May 27 17:45:39.175550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:45:39.179807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:45:39.214980 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 17:45:39.215307 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 17:45:39.225885 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 17:45:39.616284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:45:39.619164 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:45:39.621921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:45:39.624499 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 27 17:45:39.627984 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:45:39.659561 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 17:45:40.182538 disk-uuid[634]: The operation has completed successfully. May 27 17:45:40.184429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:45:40.217666 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:45:40.217849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:45:40.269582 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:45:40.301237 sh[663]: Success May 27 17:45:40.322222 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 17:45:40.322264 kernel: device-mapper: uevent: version 1.0.3 May 27 17:45:40.323512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:45:40.332812 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 17:45:40.366009 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:45:40.370328 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:45:40.387280 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 17:45:40.394310 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:45:40.394340 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (675) May 27 17:45:40.394807 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 17:45:40.396646 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:40.396666 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:45:40.402145 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:45:40.403707 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 17:45:40.405353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:45:40.406592 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:45:40.409483 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 17:45:40.444634 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (707) May 27 17:45:40.444685 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:40.444696 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:40.445735 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:45:40.453811 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:40.453955 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:45:40.457579 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 27 17:45:40.638003 ignition[753]: Ignition 2.21.0 May 27 17:45:40.638017 ignition[753]: Stage: fetch-offline May 27 17:45:40.638076 ignition[753]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:40.638088 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:45:40.638232 ignition[753]: parsed url from cmdline: "" May 27 17:45:40.638236 ignition[753]: no config URL provided May 27 17:45:40.638243 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:45:40.638252 ignition[753]: no config at "/usr/lib/ignition/user.ign" May 27 17:45:40.638279 ignition[753]: op(1): [started] loading QEMU firmware config module May 27 17:45:40.638285 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 17:45:40.646737 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:45:40.648345 ignition[753]: op(1): [finished] loading QEMU firmware config module May 27 17:45:40.653014 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:45:40.691264 ignition[753]: parsing config with SHA512: a473fd7897b65eb6e492f1957865b8da22adab526cb3ff15d3c14bec7e86c75acd9bf5f07dab53838f41e5103291bd527d09551c5d34939d0906c9f4b9431563 May 27 17:45:40.700632 unknown[753]: fetched base config from "system" May 27 17:45:40.700648 unknown[753]: fetched user config from "qemu" May 27 17:45:40.701272 ignition[753]: fetch-offline: fetch-offline passed May 27 17:45:40.701548 systemd-networkd[852]: lo: Link UP May 27 17:45:40.701368 ignition[753]: Ignition finished successfully May 27 17:45:40.701552 systemd-networkd[852]: lo: Gained carrier May 27 17:45:40.703189 systemd-networkd[852]: Enumeration completed May 27 17:45:40.703314 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 27 17:45:40.703628 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:45:40.703632 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:45:40.704645 systemd-networkd[852]: eth0: Link UP May 27 17:45:40.704649 systemd-networkd[852]: eth0: Gained carrier May 27 17:45:40.704659 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:45:40.705339 systemd[1]: Reached target network.target - Network. May 27 17:45:40.707668 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:45:40.709898 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 17:45:40.710713 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 17:45:40.716836 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:45:40.754913 ignition[856]: Ignition 2.21.0 May 27 17:45:40.754926 ignition[856]: Stage: kargs May 27 17:45:40.755108 ignition[856]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:40.755121 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:45:40.805095 ignition[856]: kargs: kargs passed May 27 17:45:40.805221 ignition[856]: Ignition finished successfully May 27 17:45:40.810256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:45:40.812424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 17:45:40.842699 ignition[864]: Ignition 2.21.0 May 27 17:45:40.842713 ignition[864]: Stage: disks May 27 17:45:40.843029 ignition[864]: no configs at "/usr/lib/ignition/base.d" May 27 17:45:40.843040 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:45:40.845138 ignition[864]: disks: disks passed May 27 17:45:40.845192 ignition[864]: Ignition finished successfully May 27 17:45:40.847748 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:45:40.849699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:45:40.851674 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:45:40.853980 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:45:40.854039 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:45:40.854368 systemd[1]: Reached target basic.target - Basic System. May 27 17:45:40.855447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 17:45:40.888794 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 17:45:40.897686 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:45:40.898839 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:45:41.026822 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 17:45:41.027596 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:45:41.029796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:45:41.033071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:45:41.035559 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 17:45:41.037741 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 27 17:45:41.037806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:45:41.037831 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:45:41.046115 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:45:41.049818 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:45:41.055687 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (882) May 27 17:45:41.055707 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:41.055718 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:41.055733 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:45:41.059209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:45:41.095894 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:45:41.101570 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory May 27 17:45:41.106702 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:45:41.109902 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:45:41.198314 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:45:41.199603 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:45:41.202390 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 17:45:41.223808 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:41.252940 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 27 17:45:41.273390 ignition[996]: INFO : Ignition 2.21.0 May 27 17:45:41.273390 ignition[996]: INFO : Stage: mount May 27 17:45:41.275335 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:45:41.275335 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:45:41.277881 ignition[996]: INFO : mount: mount passed May 27 17:45:41.278658 ignition[996]: INFO : Ignition finished successfully May 27 17:45:41.282300 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:45:41.283471 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:45:41.393236 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 17:45:41.395249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:45:41.420427 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1008) May 27 17:45:41.420483 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:45:41.420495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:45:41.422189 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:45:41.426162 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 17:45:41.458238 ignition[1025]: INFO : Ignition 2.21.0 May 27 17:45:41.458238 ignition[1025]: INFO : Stage: files May 27 17:45:41.460194 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:45:41.460194 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:45:41.463834 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping May 27 17:45:41.465290 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:45:41.465290 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:45:41.468884 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:45:41.468884 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:45:41.468884 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:45:41.468884 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 17:45:41.468884 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 27 17:45:41.467479 unknown[1025]: wrote ssh authorized keys file for user: core May 27 17:45:41.512060 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:45:41.607679 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 27 17:45:41.607679 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:45:41.612201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:45:41.791485 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:45:41.793696 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:45:41.793696 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 17:45:41.820925 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 17:45:41.820925 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 17:45:41.825893 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 27 17:45:42.211008 systemd-networkd[852]: eth0: Gained IPv6LL May 27 17:45:42.501034 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 17:45:43.035201 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 27 17:45:43.035201 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 17:45:43.038916 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:45:43.197256 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:45:43.197256 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 17:45:43.197256 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 17:45:43.197256 ignition[1025]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:45:43.204588 ignition[1025]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:45:43.204588 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 27 17:45:43.204588 ignition[1025]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 17:45:43.227531 ignition[1025]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:45:43.286365 ignition[1025]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 
17:45:43.288518 ignition[1025]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 17:45:43.288518 ignition[1025]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 17:45:43.291956 ignition[1025]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:45:43.293678 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:45:43.295899 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:45:43.297964 ignition[1025]: INFO : files: files passed May 27 17:45:43.298912 ignition[1025]: INFO : Ignition finished successfully May 27 17:45:43.302357 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:45:43.305171 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:45:43.307938 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:45:43.340332 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:45:43.340500 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:45:43.345030 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory May 27 17:45:43.347930 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:43.347930 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:43.352293 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:45:43.354418 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 27 17:45:43.354720 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:45:43.360347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:45:43.447871 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:45:43.448026 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:45:43.453621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:45:43.454813 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:45:43.457921 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:45:43.459175 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:45:43.504052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:45:43.506315 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:45:43.531121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:43.531323 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:43.533593 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:45:43.535880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:45:43.536026 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:45:43.554583 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:45:43.554753 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:45:43.557546 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:45:43.576686 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:45:43.577838 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:45:43.578334 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:45:43.578668 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:45:43.579192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:45:43.579535 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:45:43.579895 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:45:43.580371 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:45:43.580670 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:45:43.580838 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:45:43.581585 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:43.582134 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:43.582411 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:45:43.582542 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:43.601009 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:45:43.601139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:45:43.604299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:45:43.604436 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:45:43.605454 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:45:43.607449 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:45:43.612847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:43.613031 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:45:43.615651 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:45:43.617370 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:45:43.617483 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:45:43.619166 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:45:43.619267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:45:43.621010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:45:43.621150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:45:43.622980 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:45:43.623104 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:45:43.626913 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:45:43.628672 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:45:43.630238 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:45:43.630399 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:43.633656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:45:43.633855 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:45:43.644665 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:45:43.649017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:45:43.673443 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:45:43.760036 ignition[1080]: INFO : Ignition 2.21.0
May 27 17:45:43.760036 ignition[1080]: INFO : Stage: umount
May 27 17:45:43.762155 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:45:43.762155 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:45:43.762155 ignition[1080]: INFO : umount: umount passed
May 27 17:45:43.762155 ignition[1080]: INFO : Ignition finished successfully
May 27 17:45:43.768612 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:45:43.768770 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:45:43.769308 systemd[1]: Stopped target network.target - Network.
May 27 17:45:43.772052 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:45:43.772113 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:45:43.774074 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:45:43.774123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:45:43.778309 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:45:43.778365 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:45:43.779473 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:45:43.779519 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:45:43.780420 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:45:43.784982 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:45:43.795414 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:45:43.795574 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:45:43.800259 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:45:43.800549 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:45:43.800596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:43.806290 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:45:43.806576 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:45:43.806695 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:45:43.809745 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:45:43.810173 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:45:43.811469 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:45:43.811541 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:43.817489 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:45:43.818458 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:45:43.818512 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:45:43.820761 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:45:43.820823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:43.824110 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:45:43.824168 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:43.824632 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:43.826422 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:45:43.843739 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:45:43.843891 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:45:43.877665 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:45:43.877962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:43.880250 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:45:43.880296 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:43.882613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:45:43.882649 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:43.884734 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:45:43.884809 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:45:43.887220 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:45:43.887275 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:45:43.888068 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:45:43.888122 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:45:43.889627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:45:43.894423 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:45:43.894487 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:43.898902 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:45:43.898961 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:43.902220 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:45:43.902273 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:43.944620 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:45:43.944708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:43.947242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:43.947294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:43.951761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:45:43.951906 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:45:44.469984 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:45:44.470122 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:45:44.470800 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:45:44.473059 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:45:44.473129 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:45:44.477132 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:45:44.506850 systemd[1]: Switching root.
May 27 17:45:44.552438 systemd-journald[220]: Journal stopped
May 27 17:45:46.279966 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 27 17:45:46.280040 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:45:46.280062 kernel: SELinux: policy capability open_perms=1
May 27 17:45:46.280093 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:45:46.280107 kernel: SELinux: policy capability always_check_network=0
May 27 17:45:46.280128 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:45:46.280150 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:45:46.280165 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:45:46.280180 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:45:46.280194 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:45:46.280209 kernel: audit: type=1403 audit(1748367945.285:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:45:46.280232 systemd[1]: Successfully loaded SELinux policy in 51.725ms.
May 27 17:45:46.280265 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.934ms.
May 27 17:45:46.280282 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:45:46.280298 systemd[1]: Detected virtualization kvm.
May 27 17:45:46.280313 systemd[1]: Detected architecture x86-64.
May 27 17:45:46.280328 systemd[1]: Detected first boot.
May 27 17:45:46.280344 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:45:46.280359 zram_generator::config[1125]: No configuration found.
May 27 17:45:46.280375 kernel: Guest personality initialized and is inactive
May 27 17:45:46.280393 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 17:45:46.280414 kernel: Initialized host personality
May 27 17:45:46.280428 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:45:46.280442 systemd[1]: Populated /etc with preset unit settings.
May 27 17:45:46.280459 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:45:46.280474 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:45:46.280489 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:45:46.280505 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:45:46.280521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:45:46.280540 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:45:46.280555 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:45:46.280571 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:45:46.280586 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:45:46.280612 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:45:46.280628 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:45:46.280643 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:45:46.280668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:46.280687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:46.280703 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:45:46.280718 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:45:46.280734 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:45:46.280750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:45:46.280765 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:45:46.280848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:46.280864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:46.280883 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:45:46.280898 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:45:46.280913 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:45:46.280929 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:45:46.280944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:46.280959 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:45:46.280974 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:45:46.280990 systemd[1]: Reached target swap.target - Swaps.
May 27 17:45:46.281005 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:45:46.281024 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:45:46.281040 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:45:46.281056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:46.281071 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:46.281087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:46.281102 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:45:46.281118 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:45:46.281133 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:45:46.281149 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:45:46.281168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:46.281183 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:45:46.281199 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:45:46.281216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:45:46.281232 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:45:46.281247 systemd[1]: Reached target machines.target - Containers.
May 27 17:45:46.281263 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:45:46.281278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:46.281297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:45:46.281312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:45:46.281327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:46.281343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:45:46.281364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:46.281380 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:45:46.281395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:46.281410 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:45:46.281426 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:45:46.281444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:45:46.281459 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:45:46.281475 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:45:46.281491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:46.281507 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:45:46.281524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:45:46.281539 kernel: fuse: init (API version 7.41)
May 27 17:45:46.281554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:45:46.281569 kernel: loop: module loaded
May 27 17:45:46.281587 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:45:46.281603 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:45:46.281619 kernel: ACPI: bus type drm_connector registered
May 27 17:45:46.281634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:45:46.281650 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:45:46.281690 systemd[1]: Stopped verity-setup.service.
May 27 17:45:46.281706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:46.281722 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:45:46.281738 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:45:46.281753 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:45:46.281769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:45:46.281802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:45:46.281842 systemd-journald[1200]: Collecting audit messages is disabled.
May 27 17:45:46.281872 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:45:46.281888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:45:46.281904 systemd-journald[1200]: Journal started
May 27 17:45:46.281936 systemd-journald[1200]: Runtime Journal (/run/log/journal/9591938bc26e404a9fb78977bac02eae) is 6M, max 48.6M, 42.5M free.
May 27 17:45:45.858471 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:45:45.873045 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 17:45:45.873687 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:45:46.286065 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:45:46.287259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:46.289058 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:45:46.289341 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:45:46.291206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:46.291458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:46.293399 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:45:46.293639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:45:46.295436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:46.295729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:46.297653 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:45:46.297955 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:45:46.299927 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:46.300194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:46.301956 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:46.303703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:46.305515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:45:46.307484 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:45:46.327118 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:45:46.330081 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:45:46.332514 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:45:46.333913 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:45:46.334021 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:45:46.336594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:45:46.341523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:45:46.342863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:46.344517 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:45:46.350505 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:45:46.352243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:45:46.354574 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:45:46.356196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:45:46.357634 systemd-journald[1200]: Time spent on flushing to /var/log/journal/9591938bc26e404a9fb78977bac02eae is 14.360ms for 974 entries.
May 27 17:45:46.357634 systemd-journald[1200]: System Journal (/var/log/journal/9591938bc26e404a9fb78977bac02eae) is 8M, max 195.6M, 187.6M free.
May 27 17:45:46.387886 systemd-journald[1200]: Received client request to flush runtime journal.
May 27 17:45:46.358909 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:45:46.361437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:45:46.363664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:45:46.366696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:46.368199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:45:46.369673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:45:46.385559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:45:46.387313 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:45:46.391921 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:45:46.393982 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:45:46.404809 kernel: loop0: detected capacity change from 0 to 113872
May 27 17:45:46.417876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:46.428860 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 27 17:45:46.428880 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 27 17:45:46.433853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:45:46.432106 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:45:46.439242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:46.443464 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:45:46.454836 kernel: loop1: detected capacity change from 0 to 221472
May 27 17:45:46.490843 kernel: loop2: detected capacity change from 0 to 146240
May 27 17:45:46.492130 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:45:46.496302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:45:46.600858 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 17:45:46.600880 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 17:45:46.605963 kernel: loop3: detected capacity change from 0 to 113872
May 27 17:45:46.607298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:46.614799 kernel: loop4: detected capacity change from 0 to 221472
May 27 17:45:46.631819 kernel: loop5: detected capacity change from 0 to 146240
May 27 17:45:46.646826 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 17:45:46.647506 (sd-merge)[1270]: Merged extensions into '/usr'.
May 27 17:45:46.654676 systemd[1]: Reload requested from client PID 1244 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:45:46.654694 systemd[1]: Reloading...
May 27 17:45:46.752930 zram_generator::config[1297]: No configuration found.
May 27 17:45:46.865972 ldconfig[1239]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:45:46.883176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:45:46.978954 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:45:46.979259 systemd[1]: Reloading finished in 323 ms.
May 27 17:45:47.040983 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:45:47.042902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:45:47.062502 systemd[1]: Starting ensure-sysext.service...
May 27 17:45:47.073947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:45:47.100407 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
May 27 17:45:47.100423 systemd[1]: Reloading...
May 27 17:45:47.117378 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:45:47.117427 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:45:47.117819 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:45:47.118134 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:45:47.119481 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:45:47.119943 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 27 17:45:47.120032 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
May 27 17:45:47.125599 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:47.125704 systemd-tmpfiles[1335]: Skipping /boot
May 27 17:45:47.146885 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:47.147090 systemd-tmpfiles[1335]: Skipping /boot
May 27 17:45:47.171833 zram_generator::config[1362]: No configuration found.
May 27 17:45:47.270534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:45:47.355392 systemd[1]: Reloading finished in 254 ms.
May 27 17:45:47.398880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:47.407502 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:45:47.410110 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:45:47.425301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:45:47.454587 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:45:47.456947 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:45:47.461447 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:47.461730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:47.463330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:47.466336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:47.472625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:47.473994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:47.474193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:47.477996 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:45:47.479010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:47.480575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:47.480810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:47.482640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:47.482934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:47.484934 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:47.485158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:47.496207 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:45:47.501023 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:45:47.515328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:47.515761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:47.518105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:47.521192 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:45:47.532236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:47.536688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:47.538528 augenrules[1436]: No rules
May 27 17:45:47.538011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:47.538193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:47.538353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:47.539754 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:45:47.540113 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:45:47.542140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:47.542423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:47.544568 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:45:47.545598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:45:47.550674 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:45:47.552669 systemd[1]: Finished ensure-sysext.service.
May 27 17:45:47.556123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:47.556404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:47.558176 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:47.558400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:47.564010 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:45:47.567746 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:45:47.573762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:45:47.573910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:45:47.578096 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:45:47.581890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:47.584692 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:45:47.585818 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:45:47.612466 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:45:47.642575 systemd-udevd[1456]: Using default interface naming scheme 'v255'.
May 27 17:45:47.651066 systemd-resolved[1403]: Positive Trust Anchors:
May 27 17:45:47.651397 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:45:47.651436 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:45:47.655381 systemd-resolved[1403]: Defaulting to hostname 'linux'.
May 27 17:45:47.657183 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:45:47.658505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:47.670931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:47.674902 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:45:47.683608 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:45:47.686066 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:45:47.687856 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:45:47.691028 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:45:47.692520 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:45:47.693927 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:45:47.695507 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:45:47.695550 systemd[1]: Reached target paths.target - Path Units.
May 27 17:45:47.697950 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:45:47.699490 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:45:47.700997 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:45:47.702464 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:45:47.706413 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:45:47.709999 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:45:47.721681 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:45:47.723646 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:45:47.725256 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:45:47.737109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:45:47.739109 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:45:47.744901 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:45:47.753613 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:45:47.755203 systemd[1]: Reached target basic.target - Basic System.
May 27 17:45:47.756491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:47.756525 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:47.758656 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:45:47.763066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:45:47.767453 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:45:47.771579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:45:47.773148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:45:47.778125 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:45:47.782016 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:45:47.786509 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:45:47.795578 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:45:47.798640 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:45:47.805954 jq[1493]: false
May 27 17:45:47.814632 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:45:47.816982 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:45:47.817597 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:45:47.819488 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:45:47.823022 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:45:47.827305 systemd-networkd[1463]: lo: Link UP
May 27 17:45:47.829813 systemd-networkd[1463]: lo: Gained carrier
May 27 17:45:47.830962 systemd-networkd[1463]: Enumeration completed
May 27 17:45:47.831069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:45:47.832950 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:45:47.834751 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Refreshing passwd entry cache
May 27 17:45:47.834738 oslogin_cache_refresh[1496]: Refreshing passwd entry cache
May 27 17:45:47.834977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:45:47.835284 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:45:47.845173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:45:47.845446 systemd[1]: Reached target network.target - Network.
May 27 17:45:47.847125 jq[1510]: true
May 27 17:45:47.847692 oslogin_cache_refresh[1496]: Failure getting users, quitting
May 27 17:45:47.847798 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Failure getting users, quitting
May 27 17:45:47.847798 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:47.847798 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Refreshing group entry cache
May 27 17:45:47.847708 oslogin_cache_refresh[1496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:47.847756 oslogin_cache_refresh[1496]: Refreshing group entry cache
May 27 17:45:47.848264 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Failure getting groups, quitting
May 27 17:45:47.848264 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:47.848255 oslogin_cache_refresh[1496]: Failure getting groups, quitting
May 27 17:45:47.848263 oslogin_cache_refresh[1496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:47.854123 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:45:47.858053 update_engine[1508]: I20250527 17:45:47.857945 1508 main.cc:92] Flatcar Update Engine starting
May 27 17:45:47.859109 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:45:47.864816 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:45:47.866732 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:45:47.868865 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:45:47.870713 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:45:47.870979 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:45:47.873675 extend-filesystems[1494]: Found loop3
May 27 17:45:47.892656 extend-filesystems[1494]: Found loop4
May 27 17:45:47.892656 extend-filesystems[1494]: Found loop5
May 27 17:45:47.892656 extend-filesystems[1494]: Found sr0
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda1
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda2
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda3
May 27 17:45:47.892656 extend-filesystems[1494]: Found usr
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda4
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda6
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda7
May 27 17:45:47.892656 extend-filesystems[1494]: Found vda9
May 27 17:45:47.889721 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:45:47.918036 dbus-daemon[1491]: [system] SELinux support is enabled
May 27 17:45:47.890252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:45:47.892260 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:45:47.893302 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:45:47.919450 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:45:47.937327 update_engine[1508]: I20250527 17:45:47.937256 1508 update_check_scheduler.cc:74] Next update check in 3m24s
May 27 17:45:47.942102 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:45:47.968340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:45:47.970076 tar[1515]: linux-amd64/helm
May 27 17:45:47.971526 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:45:47.974066 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:45:47.974097 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:45:47.978352 jq[1526]: true
May 27 17:45:47.979647 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:45:48.087900 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:45:48.087945 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:45:48.098638 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:48.098651 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:45:48.099380 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:45:48.099416 systemd-networkd[1463]: eth0: Link UP
May 27 17:45:48.099653 systemd-networkd[1463]: eth0: Gained carrier
May 27 17:45:48.099666 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:48.103985 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:45:48.116243 systemd-networkd[1463]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:45:48.117318 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection.
May 27 17:45:48.119067 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 17:45:48.119115 systemd-timesyncd[1455]: Initial clock synchronization to Tue 2025-05-27 17:45:48.358508 UTC.
May 27 17:45:48.128448 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:45:48.139839 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:45:48.194857 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 17:45:48.195965 systemd-logind[1504]: New seat seat0.
May 27 17:45:48.198764 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:45:48.211794 kernel: ACPI: button: Power Button [PWRF]
May 27 17:45:48.222835 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:45:48.269803 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 17:45:48.273842 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 17:45:48.282850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:45:48.290468 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:45:48.318147 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:45:48.319616 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:45:48.334599 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:45:48.340546 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:45:48.368312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:45:48.374260 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:45:48.377544 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 17:45:48.379051 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:45:48.396678 bash[1566]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:45:48.399715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:45:48.403526 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:45:48.420133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:48.454594 systemd-logind[1504]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:45:48.460683 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:45:48.473301 kernel: kvm_amd: TSC scaling supported
May 27 17:45:48.473369 kernel: kvm_amd: Nested Virtualization enabled
May 27 17:45:48.473383 kernel: kvm_amd: Nested Paging enabled
May 27 17:45:48.473396 kernel: kvm_amd: LBR virtualization supported
May 27 17:45:48.475280 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 17:45:48.475319 kernel: kvm_amd: Virtual GIF supported
May 27 17:45:48.498093 containerd[1533]: time="2025-05-27T17:45:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:45:48.499478 containerd[1533]: time="2025-05-27T17:45:48.499454172Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:45:48.512736 containerd[1533]: time="2025-05-27T17:45:48.512704021Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.799µs"
May 27 17:45:48.513927 containerd[1533]: time="2025-05-27T17:45:48.513903931Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:45:48.514001 containerd[1533]: time="2025-05-27T17:45:48.513986305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:45:48.516121 containerd[1533]: time="2025-05-27T17:45:48.515990634Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:45:48.516121 containerd[1533]: time="2025-05-27T17:45:48.516012786Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:45:48.516121 containerd[1533]: time="2025-05-27T17:45:48.516035038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:45:48.516247 containerd[1533]: time="2025-05-27T17:45:48.516129645Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:45:48.516247 containerd[1533]: time="2025-05-27T17:45:48.516141247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516394542Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516413177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516423215Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516430830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516528052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516753415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516796165Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516805152Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.516837162Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.517018622Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:45:48.518998 containerd[1533]: time="2025-05-27T17:45:48.517076080Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:45:48.519804 kernel: EDAC MC: Ver: 3.0.0
May 27 17:45:48.666077 containerd[1533]: time="2025-05-27T17:45:48.665960992Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:45:48.666077 containerd[1533]: time="2025-05-27T17:45:48.666047785Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:45:48.666077 containerd[1533]: time="2025-05-27T17:45:48.666080055Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:45:48.666077 containerd[1533]: time="2025-05-27T17:45:48.666091667Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666104040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666113749Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666214277Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666228484Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666247379Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666256797Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666265133Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:45:48.666346 containerd[1533]: time="2025-05-27T17:45:48.666297744Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666478643Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666497238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666512116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666522155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666539247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666549295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:45:48.666563 containerd[1533]: time="2025-05-27T17:45:48.666559545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666571347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666582257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666591645Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666628314Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666732469Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:45:48.666758 containerd[1533]: time="2025-05-27T17:45:48.666746265Z" level=info msg="Start snapshots syncer"
May 27 17:45:48.666929 containerd[1533]: time="2025-05-27T17:45:48.666822518Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:45:48.667146 containerd[1533]: time="2025-05-27T17:45:48.667106220Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:45:48.667248 containerd[1533]: time="2025-05-27T17:45:48.667158799Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:45:48.667248 containerd[1533]: time="2025-05-27T17:45:48.667231785Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:45:48.667389 containerd[1533]: time="2025-05-27T17:45:48.667368422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:45:48.667426 containerd[1533]: time="2025-05-27T17:45:48.667393118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:45:48.667426 containerd[1533]: time="2025-05-27T17:45:48.667403247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:45:48.667426 containerd[1533]: time="2025-05-27T17:45:48.667413176Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:45:48.667426 containerd[1533]: time="2025-05-27T17:45:48.667424697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:45:48.667541 containerd[1533]: time="2025-05-27T17:45:48.667435037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:45:48.667541 containerd[1533]: time="2025-05-27T17:45:48.667445436Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:45:48.667541 containerd[1533]: time="2025-05-27T17:45:48.667478368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:45:48.667541 containerd[1533]: time="2025-05-27T17:45:48.667488847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:45:48.667541 containerd[1533]: time="2025-05-27T17:45:48.667499437Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:45:48.668165 containerd[1533]: time="2025-05-27T17:45:48.668144156Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668167821Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668177409Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668186616Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668194300Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668202846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668233314Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668261877Z" level=info msg="runtime interface created"
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668267217Z" level=info msg="created NRI interface"
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668283798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:45:48.668306 containerd[1533]: time="2025-05-27T17:45:48.668293677Z" level=info msg="Connect containerd service"
May 27 17:45:48.668523 containerd[1533]: time="2025-05-27T17:45:48.668322060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:45:48.669310
containerd[1533]: time="2025-05-27T17:45:48.669286499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:45:48.671201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:45:48.714747 tar[1515]: linux-amd64/LICENSE May 27 17:45:48.714895 tar[1515]: linux-amd64/README.md May 27 17:45:48.733103 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:45:48.769615 containerd[1533]: time="2025-05-27T17:45:48.769542414Z" level=info msg="Start subscribing containerd event" May 27 17:45:48.769795 containerd[1533]: time="2025-05-27T17:45:48.769638414Z" level=info msg="Start recovering state" May 27 17:45:48.769795 containerd[1533]: time="2025-05-27T17:45:48.769769410Z" level=info msg="Start event monitor" May 27 17:45:48.769848 containerd[1533]: time="2025-05-27T17:45:48.769804526Z" level=info msg="Start cni network conf syncer for default" May 27 17:45:48.769848 containerd[1533]: time="2025-05-27T17:45:48.769813803Z" level=info msg="Start streaming server" May 27 17:45:48.769848 containerd[1533]: time="2025-05-27T17:45:48.769831557Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:45:48.769848 containerd[1533]: time="2025-05-27T17:45:48.769839261Z" level=info msg="runtime interface starting up..." May 27 17:45:48.769957 containerd[1533]: time="2025-05-27T17:45:48.769835003Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:45:48.769957 containerd[1533]: time="2025-05-27T17:45:48.769907249Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:45:48.769957 containerd[1533]: time="2025-05-27T17:45:48.769845853Z" level=info msg="starting plugins..." 
May 27 17:45:48.769957 containerd[1533]: time="2025-05-27T17:45:48.769944689Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:45:48.770092 containerd[1533]: time="2025-05-27T17:45:48.770068721Z" level=info msg="containerd successfully booted in 0.273033s" May 27 17:45:48.770222 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:45:49.507058 systemd-networkd[1463]: eth0: Gained IPv6LL May 27 17:45:49.510172 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:45:49.512308 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:45:49.515057 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:45:49.518944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:45:49.527042 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:45:49.553032 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:45:49.556215 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 17:45:49.556559 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:45:49.558187 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:45:50.314548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:45:50.316952 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:45:50.318403 systemd[1]: Startup finished in 2.991s (kernel) + 7.647s (initrd) + 5.083s (userspace) = 15.722s. 
May 27 17:45:50.352324 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:45:50.803661 kubelet[1653]: E0527 17:45:50.803529 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:45:50.807732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:45:50.807965 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:45:50.808330 systemd[1]: kubelet.service: Consumed 1.020s CPU time, 264.4M memory peak. May 27 17:45:51.415183 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:45:51.416477 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:42436.service - OpenSSH per-connection server daemon (10.0.0.1:42436). May 27 17:45:51.481766 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 42436 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:51.483866 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:51.491050 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:45:51.492169 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:45:51.499634 systemd-logind[1504]: New session 1 of user core. May 27 17:45:51.515420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:45:51.518712 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 27 17:45:51.534092 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:45:51.536866 systemd-logind[1504]: New session c1 of user core. May 27 17:45:51.703745 systemd[1670]: Queued start job for default target default.target. May 27 17:45:51.727155 systemd[1670]: Created slice app.slice - User Application Slice. May 27 17:45:51.727183 systemd[1670]: Reached target paths.target - Paths. May 27 17:45:51.727225 systemd[1670]: Reached target timers.target - Timers. May 27 17:45:51.728852 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:45:51.741775 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:45:51.741926 systemd[1670]: Reached target sockets.target - Sockets. May 27 17:45:51.741970 systemd[1670]: Reached target basic.target - Basic System. May 27 17:45:51.742011 systemd[1670]: Reached target default.target - Main User Target. May 27 17:45:51.742043 systemd[1670]: Startup finished in 197ms. May 27 17:45:51.742346 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:45:51.743929 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:45:51.812235 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440). May 27 17:45:51.861802 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:51.863117 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:51.867562 systemd-logind[1504]: New session 2 of user core. May 27 17:45:51.880945 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 17:45:51.935113 sshd[1683]: Connection closed by 10.0.0.1 port 42440 May 27 17:45:51.935438 sshd-session[1681]: pam_unix(sshd:session): session closed for user core May 27 17:45:51.948257 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:42440.service: Deactivated successfully. May 27 17:45:51.949920 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:45:51.950622 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. May 27 17:45:51.953291 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450). May 27 17:45:51.953843 systemd-logind[1504]: Removed session 2. May 27 17:45:52.011014 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:52.012474 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:52.016654 systemd-logind[1504]: New session 3 of user core. May 27 17:45:52.025942 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:45:52.076295 sshd[1691]: Connection closed by 10.0.0.1 port 42450 May 27 17:45:52.076696 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 27 17:45:52.092433 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:42450.service: Deactivated successfully. May 27 17:45:52.094163 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:45:52.095031 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. May 27 17:45:52.097780 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:42456.service - OpenSSH per-connection server daemon (10.0.0.1:42456). May 27 17:45:52.098564 systemd-logind[1504]: Removed session 3. 
May 27 17:45:52.153814 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 42456 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:52.155104 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:52.159401 systemd-logind[1504]: New session 4 of user core. May 27 17:45:52.169938 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:45:52.222759 sshd[1699]: Connection closed by 10.0.0.1 port 42456 May 27 17:45:52.223189 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 27 17:45:52.233677 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:42456.service: Deactivated successfully. May 27 17:45:52.235375 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:45:52.236222 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. May 27 17:45:52.239151 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:42472.service - OpenSSH per-connection server daemon (10.0.0.1:42472). May 27 17:45:52.239888 systemd-logind[1504]: Removed session 4. May 27 17:45:52.301440 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 42472 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:52.302764 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:52.307082 systemd-logind[1504]: New session 5 of user core. May 27 17:45:52.322934 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 17:45:52.381175 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:45:52.381495 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:52.412157 sudo[1708]: pam_unix(sudo:session): session closed for user root May 27 17:45:52.413727 sshd[1707]: Connection closed by 10.0.0.1 port 42472 May 27 17:45:52.414134 sshd-session[1705]: pam_unix(sshd:session): session closed for user core May 27 17:45:52.427506 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:42472.service: Deactivated successfully. May 27 17:45:52.429336 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:45:52.430062 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. May 27 17:45:52.432935 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:42484.service - OpenSSH per-connection server daemon (10.0.0.1:42484). May 27 17:45:52.433519 systemd-logind[1504]: Removed session 5. May 27 17:45:52.484490 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 42484 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:52.485958 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:52.490482 systemd-logind[1504]: New session 6 of user core. May 27 17:45:52.505911 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 17:45:52.559536 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:45:52.559862 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:52.569507 sudo[1718]: pam_unix(sudo:session): session closed for user root May 27 17:45:52.576154 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:45:52.576462 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:52.587091 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:45:52.639862 augenrules[1740]: No rules May 27 17:45:52.641700 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:45:52.642021 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:45:52.643375 sudo[1717]: pam_unix(sudo:session): session closed for user root May 27 17:45:52.644961 sshd[1716]: Connection closed by 10.0.0.1 port 42484 May 27 17:45:52.645385 sshd-session[1714]: pam_unix(sshd:session): session closed for user core May 27 17:45:52.665116 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:42484.service: Deactivated successfully. May 27 17:45:52.667151 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:45:52.667967 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. May 27 17:45:52.671037 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:42488.service - OpenSSH per-connection server daemon (10.0.0.1:42488). May 27 17:45:52.671662 systemd-logind[1504]: Removed session 6. May 27 17:45:52.722135 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 42488 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:45:52.723611 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:45:52.728375 systemd-logind[1504]: New session 7 of user core. 
May 27 17:45:52.737965 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:45:52.792120 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:45:52.792428 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:45:53.114765 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:45:53.132199 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:45:53.357111 dockerd[1774]: time="2025-05-27T17:45:53.357036343Z" level=info msg="Starting up" May 27 17:45:53.358897 dockerd[1774]: time="2025-05-27T17:45:53.358853607Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:45:53.876311 dockerd[1774]: time="2025-05-27T17:45:53.876252382Z" level=info msg="Loading containers: start." May 27 17:45:53.886828 kernel: Initializing XFRM netlink socket May 27 17:45:54.126180 systemd-networkd[1463]: docker0: Link UP May 27 17:45:54.131915 dockerd[1774]: time="2025-05-27T17:45:54.131832598Z" level=info msg="Loading containers: done." May 27 17:45:54.145584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1942711217-merged.mount: Deactivated successfully. 
May 27 17:45:54.147206 dockerd[1774]: time="2025-05-27T17:45:54.147154155Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:45:54.147336 dockerd[1774]: time="2025-05-27T17:45:54.147231600Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:45:54.147364 dockerd[1774]: time="2025-05-27T17:45:54.147339022Z" level=info msg="Initializing buildkit" May 27 17:45:54.176114 dockerd[1774]: time="2025-05-27T17:45:54.176085843Z" level=info msg="Completed buildkit initialization" May 27 17:45:54.182484 dockerd[1774]: time="2025-05-27T17:45:54.182450697Z" level=info msg="Daemon has completed initialization" May 27 17:45:54.182588 dockerd[1774]: time="2025-05-27T17:45:54.182536748Z" level=info msg="API listen on /run/docker.sock" May 27 17:45:54.182721 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:45:54.890938 containerd[1533]: time="2025-05-27T17:45:54.890882476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 27 17:45:55.456063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225752731.mount: Deactivated successfully. 
May 27 17:45:56.332841 containerd[1533]: time="2025-05-27T17:45:56.332763845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:56.333906 containerd[1533]: time="2025-05-27T17:45:56.333619839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 27 17:45:56.334945 containerd[1533]: time="2025-05-27T17:45:56.334897709Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:56.337321 containerd[1533]: time="2025-05-27T17:45:56.337279706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:56.338073 containerd[1533]: time="2025-05-27T17:45:56.338035814Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.447111336s" May 27 17:45:56.338073 containerd[1533]: time="2025-05-27T17:45:56.338066204Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 27 17:45:56.338675 containerd[1533]: time="2025-05-27T17:45:56.338645216Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 27 17:45:57.513841 containerd[1533]: time="2025-05-27T17:45:57.513766812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:57.514627 containerd[1533]: time="2025-05-27T17:45:57.514602929Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 27 17:45:57.515718 containerd[1533]: time="2025-05-27T17:45:57.515680574Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:57.517968 containerd[1533]: time="2025-05-27T17:45:57.517920188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:57.518801 containerd[1533]: time="2025-05-27T17:45:57.518764372Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.180086466s" May 27 17:45:57.518839 containerd[1533]: time="2025-05-27T17:45:57.518806003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 27 17:45:57.519267 containerd[1533]: time="2025-05-27T17:45:57.519239046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 27 17:45:58.821841 containerd[1533]: time="2025-05-27T17:45:58.821739744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:58.822725 containerd[1533]: time="2025-05-27T17:45:58.822647086Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 27 17:45:58.824031 containerd[1533]: time="2025-05-27T17:45:58.823971048Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:58.826407 containerd[1533]: time="2025-05-27T17:45:58.826344172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:45:58.827538 containerd[1533]: time="2025-05-27T17:45:58.827499740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.308237301s" May 27 17:45:58.827538 containerd[1533]: time="2025-05-27T17:45:58.827533756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 27 17:45:58.828127 containerd[1533]: time="2025-05-27T17:45:58.828088315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 27 17:45:59.904986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840945058.mount: Deactivated successfully. 
May 27 17:46:00.507355 containerd[1533]: time="2025-05-27T17:46:00.505412236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:00.508001 containerd[1533]: time="2025-05-27T17:46:00.507959758Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 27 17:46:00.509694 containerd[1533]: time="2025-05-27T17:46:00.509620459Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:00.511844 containerd[1533]: time="2025-05-27T17:46:00.511758899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:00.512321 containerd[1533]: time="2025-05-27T17:46:00.512270423Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.684145202s" May 27 17:46:00.512321 containerd[1533]: time="2025-05-27T17:46:00.512316525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 27 17:46:00.512853 containerd[1533]: time="2025-05-27T17:46:00.512808993Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 17:46:00.963330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:46:00.964992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 27 17:46:01.174398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:01.178024 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:46:01.222652 kubelet[2065]: E0527 17:46:01.213761 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:46:01.219957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:46:01.220148 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:46:01.220510 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.1M memory peak. May 27 17:46:01.306285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376664612.mount: Deactivated successfully. 
May 27 17:46:01.982745 containerd[1533]: time="2025-05-27T17:46:01.982679822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:01.983868 containerd[1533]: time="2025-05-27T17:46:01.983798927Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 27 17:46:01.985136 containerd[1533]: time="2025-05-27T17:46:01.985101857Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:01.987621 containerd[1533]: time="2025-05-27T17:46:01.987575597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:01.988543 containerd[1533]: time="2025-05-27T17:46:01.988493934Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.475651479s"
May 27 17:46:01.988543 containerd[1533]: time="2025-05-27T17:46:01.988543968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 17:46:01.989055 containerd[1533]: time="2025-05-27T17:46:01.988982191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:46:02.428071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565845262.mount: Deactivated successfully.
May 27 17:46:02.434744 containerd[1533]: time="2025-05-27T17:46:02.434686047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:46:02.435514 containerd[1533]: time="2025-05-27T17:46:02.435479019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 17:46:02.436723 containerd[1533]: time="2025-05-27T17:46:02.436696894Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:46:02.439083 containerd[1533]: time="2025-05-27T17:46:02.439012660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:46:02.439799 containerd[1533]: time="2025-05-27T17:46:02.439735956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.723302ms"
May 27 17:46:02.439879 containerd[1533]: time="2025-05-27T17:46:02.439805399Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 17:46:02.440466 containerd[1533]: time="2025-05-27T17:46:02.440411825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 27 17:46:03.000664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636188659.mount: Deactivated successfully.
May 27 17:46:04.925692 containerd[1533]: time="2025-05-27T17:46:04.925628364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:04.926381 containerd[1533]: time="2025-05-27T17:46:04.926342069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 27 17:46:04.927428 containerd[1533]: time="2025-05-27T17:46:04.927397216Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:04.930084 containerd[1533]: time="2025-05-27T17:46:04.930045579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:04.930831 containerd[1533]: time="2025-05-27T17:46:04.930789109Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.490311973s"
May 27 17:46:04.930873 containerd[1533]: time="2025-05-27T17:46:04.930835312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 27 17:46:07.588034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:07.588231 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.1M memory peak.
May 27 17:46:07.590390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:07.615850 systemd[1]: Reload requested from client PID 2214 ('systemctl') (unit session-7.scope)...
May 27 17:46:07.615866 systemd[1]: Reloading...
May 27 17:46:07.698794 zram_generator::config[2257]: No configuration found.
May 27 17:46:07.827458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:46:07.943517 systemd[1]: Reloading finished in 327 ms.
May 27 17:46:08.011385 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:46:08.011488 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:46:08.011841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:08.011896 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.2M memory peak.
May 27 17:46:08.013638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:08.190268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:08.194348 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:46:08.233078 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:46:08.233078 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 17:46:08.233078 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:46:08.233365 kubelet[2305]: I0527 17:46:08.233214 2305 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:46:08.601799 kubelet[2305]: I0527 17:46:08.601598 2305 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 17:46:08.601799 kubelet[2305]: I0527 17:46:08.601631 2305 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:46:08.602063 kubelet[2305]: I0527 17:46:08.602047 2305 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 17:46:08.625315 kubelet[2305]: E0527 17:46:08.625268 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:08.625803 kubelet[2305]: I0527 17:46:08.625758 2305 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:46:08.632934 kubelet[2305]: I0527 17:46:08.632891 2305 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:46:08.638488 kubelet[2305]: I0527 17:46:08.638461 2305 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:46:08.639018 kubelet[2305]: I0527 17:46:08.638990 2305 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 17:46:08.639171 kubelet[2305]: I0527 17:46:08.639124 2305 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:46:08.639339 kubelet[2305]: I0527 17:46:08.639158 2305 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:46:08.639436 kubelet[2305]: I0527 17:46:08.639340 2305 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:46:08.639436 kubelet[2305]: I0527 17:46:08.639349 2305 container_manager_linux.go:300] "Creating device plugin manager"
May 27 17:46:08.639483 kubelet[2305]: I0527 17:46:08.639457 2305 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:46:08.641301 kubelet[2305]: I0527 17:46:08.641272 2305 kubelet.go:408] "Attempting to sync node with API server"
May 27 17:46:08.641301 kubelet[2305]: I0527 17:46:08.641295 2305 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:46:08.641364 kubelet[2305]: I0527 17:46:08.641329 2305 kubelet.go:314] "Adding apiserver pod source"
May 27 17:46:08.641364 kubelet[2305]: I0527 17:46:08.641352 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:46:08.645936 kubelet[2305]: W0527 17:46:08.644988 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:08.645936 kubelet[2305]: E0527 17:46:08.645046 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:08.645936 kubelet[2305]: W0527 17:46:08.645622 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:08.645936 kubelet[2305]: E0527 17:46:08.645684 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:08.646095 kubelet[2305]: I0527 17:46:08.646067 2305 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:46:08.646526 kubelet[2305]: I0527 17:46:08.646504 2305 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:46:08.647338 kubelet[2305]: W0527 17:46:08.647311 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:46:08.649357 kubelet[2305]: I0527 17:46:08.649331 2305 server.go:1274] "Started kubelet"
May 27 17:46:08.650290 kubelet[2305]: I0527 17:46:08.649843 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:46:08.650290 kubelet[2305]: I0527 17:46:08.649925 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:46:08.650290 kubelet[2305]: I0527 17:46:08.650215 2305 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:46:08.651041 kubelet[2305]: I0527 17:46:08.650834 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:46:08.651041 kubelet[2305]: I0527 17:46:08.650849 2305 server.go:449] "Adding debug handlers to kubelet server"
May 27 17:46:08.653226 kubelet[2305]: I0527 17:46:08.653099 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:46:08.655057 kubelet[2305]: E0527 17:46:08.654844 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:46:08.655057 kubelet[2305]: I0527 17:46:08.654870 2305 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 27 17:46:08.655057 kubelet[2305]: I0527 17:46:08.654960 2305 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 27 17:46:08.655057 kubelet[2305]: I0527 17:46:08.655004 2305 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:46:08.655399 kubelet[2305]: E0527 17:46:08.655354 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms"
May 27 17:46:08.656234 kubelet[2305]: W0527 17:46:08.656151 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:08.656234 kubelet[2305]: E0527 17:46:08.656208 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:08.656312 kubelet[2305]: E0527 17:46:08.656275 2305 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:46:08.656401 kubelet[2305]: I0527 17:46:08.656377 2305 factory.go:221] Registration of the systemd container factory successfully
May 27 17:46:08.656465 kubelet[2305]: I0527 17:46:08.656443 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:46:08.656527 kubelet[2305]: E0527 17:46:08.654953 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184373677c033da3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:46:08.649297315 +0000 UTC m=+0.451160002,LastTimestamp:2025-05-27 17:46:08.649297315 +0000 UTC m=+0.451160002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 17:46:08.657237 kubelet[2305]: I0527 17:46:08.657211 2305 factory.go:221] Registration of the containerd container factory successfully
May 27 17:46:08.667259 kubelet[2305]: I0527 17:46:08.667229 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 17:46:08.668982 kubelet[2305]: I0527 17:46:08.668963 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 17:46:08.669072 kubelet[2305]: I0527 17:46:08.669060 2305 status_manager.go:217] "Starting to sync pod status with apiserver"
May 27 17:46:08.669298 kubelet[2305]: I0527 17:46:08.669262 2305 kubelet.go:2321] "Starting kubelet main sync loop"
May 27 17:46:08.669479 kubelet[2305]: E0527 17:46:08.669462 2305 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:46:08.669742 kubelet[2305]: W0527 17:46:08.669702 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:08.669831 kubelet[2305]: E0527 17:46:08.669754 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:08.674507 kubelet[2305]: I0527 17:46:08.674492 2305 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 27 17:46:08.674576 kubelet[2305]: I0527 17:46:08.674566 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 27 17:46:08.674629 kubelet[2305]: I0527 17:46:08.674621 2305 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:46:08.755921 kubelet[2305]: E0527 17:46:08.755885 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:46:08.770230 kubelet[2305]: E0527 17:46:08.770172 2305 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:46:08.856105 kubelet[2305]: E0527 17:46:08.855989 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:46:08.856105 kubelet[2305]: E0527 17:46:08.856046 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms"
May 27 17:46:08.956460 kubelet[2305]: E0527 17:46:08.956412 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:46:08.970682 kubelet[2305]: E0527 17:46:08.970643 2305 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 17:46:09.057246 kubelet[2305]: E0527 17:46:09.057163 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 17:46:09.097117 kubelet[2305]: I0527 17:46:09.097066 2305 policy_none.go:49] "None policy: Start"
May 27 17:46:09.098124 kubelet[2305]: I0527 17:46:09.098075 2305 memory_manager.go:170] "Starting memorymanager" policy="None"
May 27 17:46:09.098124 kubelet[2305]: I0527 17:46:09.098120 2305 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:46:09.105256 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:46:09.126950 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:46:09.130371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:46:09.144750 kubelet[2305]: I0527 17:46:09.144713 2305 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 17:46:09.145027 kubelet[2305]: I0527 17:46:09.145006 2305 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:46:09.145058 kubelet[2305]: I0527 17:46:09.145026 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:46:09.145297 kubelet[2305]: I0527 17:46:09.145247 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:46:09.146545 kubelet[2305]: E0527 17:46:09.146510 2305 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 17:46:09.246443 kubelet[2305]: I0527 17:46:09.246409 2305 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 17:46:09.246959 kubelet[2305]: E0527 17:46:09.246812 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
May 27 17:46:09.257235 kubelet[2305]: E0527 17:46:09.257188 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms"
May 27 17:46:09.381050 systemd[1]: Created slice kubepods-burstable-podc2e4ae261cd13703b450858fb138d1c6.slice - libcontainer container kubepods-burstable-podc2e4ae261cd13703b450858fb138d1c6.slice.
May 27 17:46:09.395315 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice.
May 27 17:46:09.399247 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice.
May 27 17:46:09.449104 kubelet[2305]: I0527 17:46:09.449057 2305 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 17:46:09.449555 kubelet[2305]: E0527 17:46:09.449500 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
May 27 17:46:09.459969 kubelet[2305]: I0527 17:46:09.459926 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:46:09.459969 kubelet[2305]: I0527 17:46:09.459978 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:46:09.460142 kubelet[2305]: I0527 17:46:09.460019 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:46:09.460142 kubelet[2305]: I0527 17:46:09.460045 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:46:09.460142 kubelet[2305]: I0527 17:46:09.460071 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:46:09.460142 kubelet[2305]: I0527 17:46:09.460094 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:46:09.460271 kubelet[2305]: I0527 17:46:09.460155 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost"
May 27 17:46:09.460271 kubelet[2305]: I0527 17:46:09.460197 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 17:46:09.460271 kubelet[2305]: I0527 17:46:09.460223 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 27 17:46:09.572186 kubelet[2305]: W0527 17:46:09.572111 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:09.572271 kubelet[2305]: E0527 17:46:09.572189 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:09.693011 kubelet[2305]: E0527 17:46:09.692887 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:09.693662 containerd[1533]: time="2025-05-27T17:46:09.693595602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2e4ae261cd13703b450858fb138d1c6,Namespace:kube-system,Attempt:0,}"
May 27 17:46:09.697931 kubelet[2305]: E0527 17:46:09.697895 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:09.698369 containerd[1533]: time="2025-05-27T17:46:09.698329571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}"
May 27 17:46:09.701576 kubelet[2305]: E0527 17:46:09.701550 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:09.701896 containerd[1533]: time="2025-05-27T17:46:09.701807883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}"
May 27 17:46:09.851720 kubelet[2305]: I0527 17:46:09.851669 2305 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 17:46:09.852055 kubelet[2305]: E0527 17:46:09.852016 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
May 27 17:46:09.883969 kubelet[2305]: W0527 17:46:09.883881 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:09.884026 kubelet[2305]: E0527 17:46:09.883967 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:09.932842 kubelet[2305]: W0527 17:46:09.932754 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:09.932842 kubelet[2305]: E0527 17:46:09.932844 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:09.951135 kubelet[2305]: W0527 17:46:09.951015 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 27 17:46:09.951135 kubelet[2305]: E0527 17:46:09.951094 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
May 27 17:46:09.955793 containerd[1533]: time="2025-05-27T17:46:09.955698854Z" level=info msg="connecting to shim 82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476" address="unix:///run/containerd/s/2a9dfebdcbde43a81fde4675e59599be0a9c27ef615d4618b9157ffe6dab8ab7" namespace=k8s.io protocol=ttrpc version=3
May 27 17:46:09.956288 containerd[1533]: time="2025-05-27T17:46:09.956259262Z" level=info msg="connecting to shim 8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6" address="unix:///run/containerd/s/9d11112980598a86c6d0c7d58773c1488c550c2e21e3460fcbafaef7fbea4a5c" namespace=k8s.io protocol=ttrpc version=3
May 27 17:46:09.958345 containerd[1533]: time="2025-05-27T17:46:09.958272402Z" level=info msg="connecting to shim 4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91" address="unix:///run/containerd/s/dece4effa199ac38af760428d4253a7ff7ae1a493f35ec2bb921fb9761408648" namespace=k8s.io protocol=ttrpc version=3
May 27 17:46:09.986940 systemd[1]: Started cri-containerd-8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6.scope - libcontainer container 8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6.
May 27 17:46:09.991921 systemd[1]: Started cri-containerd-4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91.scope - libcontainer container 4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91.
May 27 17:46:09.993526 systemd[1]: Started cri-containerd-82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476.scope - libcontainer container 82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476.
May 27 17:46:10.038640 containerd[1533]: time="2025-05-27T17:46:10.038352040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2e4ae261cd13703b450858fb138d1c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6\""
May 27 17:46:10.039960 kubelet[2305]: E0527 17:46:10.039929 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:10.043635 containerd[1533]: time="2025-05-27T17:46:10.043596901Z" level=info msg="CreateContainer within sandbox \"8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 17:46:10.044585 containerd[1533]: time="2025-05-27T17:46:10.044549157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476\""
May 27 17:46:10.045533 kubelet[2305]: E0527 17:46:10.045480 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:10.045744 containerd[1533]: time="2025-05-27T17:46:10.045713133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91\""
May 27 17:46:10.046575 kubelet[2305]: E0527 17:46:10.046555 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:46:10.047629 containerd[1533]: time="2025-05-27T17:46:10.047610880Z" level=info msg="CreateContainer within sandbox \"82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 17:46:10.048121 containerd[1533]: time="2025-05-27T17:46:10.047913324Z" level=info msg="CreateContainer within sandbox \"4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 17:46:10.056589 containerd[1533]: time="2025-05-27T17:46:10.056569364Z" level=info msg="Container ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:10.057678 kubelet[2305]: E0527 17:46:10.057636 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s"
May 27 17:46:10.060745 containerd[1533]: time="2025-05-27T17:46:10.060719384Z" level=info msg="Container 5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:10.064766 containerd[1533]: time="2025-05-27T17:46:10.064729990Z" level=info msg="Container 45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:10.070471 containerd[1533]: time="2025-05-27T17:46:10.070434303Z" level=info msg="CreateContainer within sandbox \"8b286b2854f36168d39c613682892dda3c9ef298ad42616d14af3d8f6724d8b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4\""
May 27 17:46:10.071387 containerd[1533]: time="2025-05-27T17:46:10.071328907Z" level=info msg="StartContainer for \"ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4\""
May 27 17:46:10.071894 containerd[1533]: time="2025-05-27T17:46:10.071861508Z" level=info msg="CreateContainer within sandbox \"82e6ecf9aa59e51dd806fee506dd5b17d0c3e5e0cd97b162f7b2b965764de476\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b\""
May 27 17:46:10.072252 containerd[1533]: time="2025-05-27T17:46:10.072218170Z" level=info msg="connecting to shim ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4" address="unix:///run/containerd/s/9d11112980598a86c6d0c7d58773c1488c550c2e21e3460fcbafaef7fbea4a5c" protocol=ttrpc version=3
May 27 17:46:10.072406 containerd[1533]: time="2025-05-27T17:46:10.072262093Z" level=info msg="StartContainer for \"5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b\""
May 27 17:46:10.073515 containerd[1533]: time="2025-05-27T17:46:10.073484915Z" level=info msg="connecting to shim 5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b" address="unix:///run/containerd/s/2a9dfebdcbde43a81fde4675e59599be0a9c27ef615d4618b9157ffe6dab8ab7" protocol=ttrpc version=3
May 27 17:46:10.081505 containerd[1533]: time="2025-05-27T17:46:10.081385895Z" level=info msg="CreateContainer within sandbox \"4e41298245540c6e1b8466ec0b8230707fcab52c89c184c0b63a6b0046611f91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec\"" May 27
17:46:10.082232 containerd[1533]: time="2025-05-27T17:46:10.082214164Z" level=info msg="StartContainer for \"45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec\"" May 27 17:46:10.083766 containerd[1533]: time="2025-05-27T17:46:10.083732846Z" level=info msg="connecting to shim 45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec" address="unix:///run/containerd/s/dece4effa199ac38af760428d4253a7ff7ae1a493f35ec2bb921fb9761408648" protocol=ttrpc version=3 May 27 17:46:10.092003 systemd[1]: Started cri-containerd-ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4.scope - libcontainer container ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4. May 27 17:46:10.095659 systemd[1]: Started cri-containerd-5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b.scope - libcontainer container 5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b. May 27 17:46:10.113900 systemd[1]: Started cri-containerd-45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec.scope - libcontainer container 45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec. 
May 27 17:46:10.161054 containerd[1533]: time="2025-05-27T17:46:10.160980109Z" level=info msg="StartContainer for \"5cdc587e3b57187dc837f65a106fac378c7f127a53aa1ce3810434bb67ffe37b\" returns successfully" May 27 17:46:10.170072 containerd[1533]: time="2025-05-27T17:46:10.170006894Z" level=info msg="StartContainer for \"ee44b61b634b17decbdea166ef747ec08cbf6f3167f1194336189cd0a2a881d4\" returns successfully" May 27 17:46:10.171916 containerd[1533]: time="2025-05-27T17:46:10.171874500Z" level=info msg="StartContainer for \"45b09ce94dffc3d9336bc5ef80536f2c326181752b960395de9ff947ac10c4ec\" returns successfully" May 27 17:46:10.653921 kubelet[2305]: I0527 17:46:10.653814 2305 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 17:46:10.678891 kubelet[2305]: E0527 17:46:10.678829 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:10.685341 kubelet[2305]: E0527 17:46:10.685202 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:10.685678 kubelet[2305]: E0527 17:46:10.685630 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:11.342466 kubelet[2305]: I0527 17:46:11.342426 2305 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 27 17:46:11.342466 kubelet[2305]: E0527 17:46:11.342466 2305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 17:46:11.351412 kubelet[2305]: E0527 17:46:11.351378 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 
17:46:11.386374 kubelet[2305]: E0527 17:46:11.386283 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184373677c033da3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:46:08.649297315 +0000 UTC m=+0.451160002,LastTimestamp:2025-05-27 17:46:08.649297315 +0000 UTC m=+0.451160002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 17:46:11.451799 kubelet[2305]: E0527 17:46:11.451719 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:11.551860 kubelet[2305]: E0527 17:46:11.551811 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:11.652275 kubelet[2305]: E0527 17:46:11.652153 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:11.685885 kubelet[2305]: E0527 17:46:11.685705 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:11.685885 kubelet[2305]: E0527 17:46:11.685815 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:11.752657 kubelet[2305]: E0527 17:46:11.752578 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:11.853350 kubelet[2305]: E0527 
17:46:11.853292 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:12.644198 kubelet[2305]: I0527 17:46:12.644142 2305 apiserver.go:52] "Watching apiserver" May 27 17:46:12.655539 kubelet[2305]: I0527 17:46:12.655466 2305 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:46:12.692314 kubelet[2305]: E0527 17:46:12.692281 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:13.135715 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... May 27 17:46:13.135735 systemd[1]: Reloading... May 27 17:46:13.233818 zram_generator::config[2624]: No configuration found. May 27 17:46:13.331087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:46:13.476815 systemd[1]: Reloading finished in 340 ms. May 27 17:46:13.510921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:13.536268 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:46:13.536550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:13.536602 systemd[1]: kubelet.service: Consumed 916ms CPU time, 131.5M memory peak. May 27 17:46:13.539124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:13.758756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:46:13.764019 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:46:13.804734 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:13.804734 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 17:46:13.804734 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:13.805425 kubelet[2669]: I0527 17:46:13.804844 2669 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:46:13.813744 kubelet[2669]: I0527 17:46:13.813624 2669 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 17:46:13.813744 kubelet[2669]: I0527 17:46:13.813663 2669 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:46:13.814639 kubelet[2669]: I0527 17:46:13.814595 2669 server.go:934] "Client rotation is on, will bootstrap in background" May 27 17:46:13.816002 kubelet[2669]: I0527 17:46:13.815970 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 27 17:46:13.819077 kubelet[2669]: I0527 17:46:13.818921 2669 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:46:13.822381 kubelet[2669]: I0527 17:46:13.822346 2669 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:46:13.826937 kubelet[2669]: I0527 17:46:13.826904 2669 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:46:13.827069 kubelet[2669]: I0527 17:46:13.827033 2669 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 17:46:13.827245 kubelet[2669]: I0527 17:46:13.827200 2669 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:46:13.827439 kubelet[2669]: I0527 17:46:13.827232 2669 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memo
ry.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:46:13.827544 kubelet[2669]: I0527 17:46:13.827441 2669 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:46:13.827544 kubelet[2669]: I0527 17:46:13.827454 2669 container_manager_linux.go:300] "Creating device plugin manager" May 27 17:46:13.827544 kubelet[2669]: I0527 17:46:13.827483 2669 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:13.827650 kubelet[2669]: I0527 17:46:13.827604 2669 kubelet.go:408] "Attempting to sync node with API server" May 27 17:46:13.827650 kubelet[2669]: I0527 17:46:13.827617 2669 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:46:13.827714 kubelet[2669]: I0527 17:46:13.827652 2669 kubelet.go:314] "Adding apiserver pod source" May 27 17:46:13.827714 kubelet[2669]: I0527 17:46:13.827664 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:46:13.829032 kubelet[2669]: I0527 17:46:13.828988 2669 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:46:13.830056 kubelet[2669]: I0527 17:46:13.830029 2669 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:46:13.830499 kubelet[2669]: I0527 17:46:13.830478 2669 server.go:1274] "Started kubelet" May 27 17:46:13.832514 kubelet[2669]: I0527 17:46:13.832484 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 
17:46:13.834398 kubelet[2669]: I0527 17:46:13.834367 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:46:13.835083 kubelet[2669]: I0527 17:46:13.835042 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:46:13.837737 kubelet[2669]: I0527 17:46:13.837717 2669 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:46:13.837828 kubelet[2669]: I0527 17:46:13.837513 2669 server.go:449] "Adding debug handlers to kubelet server" May 27 17:46:13.838615 kubelet[2669]: I0527 17:46:13.835597 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:46:13.840481 kubelet[2669]: I0527 17:46:13.840453 2669 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 17:46:13.840714 kubelet[2669]: E0527 17:46:13.840656 2669 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:46:13.842400 kubelet[2669]: I0527 17:46:13.842377 2669 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 17:46:13.842527 kubelet[2669]: I0527 17:46:13.842512 2669 reconciler.go:26] "Reconciler: start to sync state" May 27 17:46:13.844005 kubelet[2669]: I0527 17:46:13.843989 2669 factory.go:221] Registration of the systemd container factory successfully May 27 17:46:13.844139 kubelet[2669]: I0527 17:46:13.844123 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:46:13.845897 kubelet[2669]: I0527 17:46:13.845865 2669 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 27 17:46:13.846238 kubelet[2669]: E0527 17:46:13.846207 2669 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:46:13.846438 kubelet[2669]: I0527 17:46:13.846424 2669 factory.go:221] Registration of the containerd container factory successfully May 27 17:46:13.847329 kubelet[2669]: I0527 17:46:13.847285 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:46:13.847329 kubelet[2669]: I0527 17:46:13.847319 2669 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 17:46:13.847428 kubelet[2669]: I0527 17:46:13.847339 2669 kubelet.go:2321] "Starting kubelet main sync loop" May 27 17:46:13.847428 kubelet[2669]: E0527 17:46:13.847383 2669 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:46:13.879491 kubelet[2669]: I0527 17:46:13.879460 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 17:46:13.879491 kubelet[2669]: I0527 17:46:13.879483 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 17:46:13.879491 kubelet[2669]: I0527 17:46:13.879503 2669 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:13.879678 kubelet[2669]: I0527 17:46:13.879657 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:46:13.879678 kubelet[2669]: I0527 17:46:13.879665 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:46:13.879721 kubelet[2669]: I0527 17:46:13.879684 2669 policy_none.go:49] "None policy: Start" May 27 17:46:13.880314 kubelet[2669]: I0527 17:46:13.880292 2669 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 17:46:13.880350 kubelet[2669]: I0527 17:46:13.880335 2669 state_mem.go:35] "Initializing new in-memory state store" May 27 17:46:13.880592 kubelet[2669]: 
I0527 17:46:13.880567 2669 state_mem.go:75] "Updated machine memory state" May 27 17:46:13.885405 kubelet[2669]: I0527 17:46:13.885368 2669 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:46:13.885631 kubelet[2669]: I0527 17:46:13.885616 2669 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:46:13.885675 kubelet[2669]: I0527 17:46:13.885632 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:46:13.885897 kubelet[2669]: I0527 17:46:13.885874 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:46:13.953978 kubelet[2669]: E0527 17:46:13.953933 2669 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:46:13.990156 kubelet[2669]: I0527 17:46:13.990116 2669 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 17:46:13.997149 kubelet[2669]: I0527 17:46:13.997100 2669 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 27 17:46:13.997284 kubelet[2669]: I0527 17:46:13.997185 2669 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 27 17:46:14.043820 kubelet[2669]: I0527 17:46:14.043673 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost" May 27 17:46:14.043820 kubelet[2669]: I0527 17:46:14.043714 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost" May 27 17:46:14.043820 kubelet[2669]: I0527 17:46:14.043735 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:46:14.043820 kubelet[2669]: I0527 17:46:14.043752 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:46:14.043820 kubelet[2669]: I0527 17:46:14.043769 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 27 17:46:14.044077 kubelet[2669]: I0527 17:46:14.043798 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e4ae261cd13703b450858fb138d1c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2e4ae261cd13703b450858fb138d1c6\") " pod="kube-system/kube-apiserver-localhost" May 27 17:46:14.044522 kubelet[2669]: I0527 17:46:14.044341 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:46:14.044522 kubelet[2669]: I0527 17:46:14.044397 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:46:14.044522 kubelet[2669]: I0527 17:46:14.044426 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:46:14.253382 kubelet[2669]: E0527 17:46:14.253328 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.253675 kubelet[2669]: E0527 17:46:14.253640 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.254508 kubelet[2669]: E0527 17:46:14.254486 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.828952 kubelet[2669]: I0527 17:46:14.828923 2669 apiserver.go:52] "Watching apiserver" May 27 17:46:14.843111 kubelet[2669]: I0527 17:46:14.843068 2669 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:46:14.862493 kubelet[2669]: E0527 17:46:14.862477 2669 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.862556 kubelet[2669]: E0527 17:46:14.862478 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.868294 kubelet[2669]: E0527 17:46:14.868254 2669 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:46:14.868445 kubelet[2669]: E0527 17:46:14.868427 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:14.889206 kubelet[2669]: I0527 17:46:14.889148 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.889127507 podStartE2EDuration="2.889127507s" podCreationTimestamp="2025-05-27 17:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:14.880533594 +0000 UTC m=+1.112834492" watchObservedRunningTime="2025-05-27 17:46:14.889127507 +0000 UTC m=+1.121428395" May 27 17:46:14.896821 kubelet[2669]: I0527 17:46:14.896353 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8963154150000001 podStartE2EDuration="1.896315415s" podCreationTimestamp="2025-05-27 17:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:14.889499931 +0000 UTC m=+1.121800839" watchObservedRunningTime="2025-05-27 17:46:14.896315415 +0000 UTC m=+1.128616313" May 27 17:46:14.896821 
kubelet[2669]: I0527 17:46:14.896463 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.896458054 podStartE2EDuration="1.896458054s" podCreationTimestamp="2025-05-27 17:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:14.896072432 +0000 UTC m=+1.128373350" watchObservedRunningTime="2025-05-27 17:46:14.896458054 +0000 UTC m=+1.128758952" May 27 17:46:15.863299 kubelet[2669]: E0527 17:46:15.863266 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:17.147638 kubelet[2669]: E0527 17:46:17.147591 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:19.116272 kubelet[2669]: I0527 17:46:19.116231 2669 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:46:19.116729 containerd[1533]: time="2025-05-27T17:46:19.116575213Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:46:19.117050 kubelet[2669]: I0527 17:46:19.116843 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:46:20.056067 systemd[1]: Created slice kubepods-besteffort-podd533f9d9_c1af_4ef5_96e1_ea5dde76205f.slice - libcontainer container kubepods-besteffort-podd533f9d9_c1af_4ef5_96e1_ea5dde76205f.slice. 
May 27 17:46:20.086060 kubelet[2669]: I0527 17:46:20.086016 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d533f9d9-c1af-4ef5-96e1-ea5dde76205f-kube-proxy\") pod \"kube-proxy-7kgbm\" (UID: \"d533f9d9-c1af-4ef5-96e1-ea5dde76205f\") " pod="kube-system/kube-proxy-7kgbm" May 27 17:46:20.086060 kubelet[2669]: I0527 17:46:20.086058 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d533f9d9-c1af-4ef5-96e1-ea5dde76205f-lib-modules\") pod \"kube-proxy-7kgbm\" (UID: \"d533f9d9-c1af-4ef5-96e1-ea5dde76205f\") " pod="kube-system/kube-proxy-7kgbm" May 27 17:46:20.086287 kubelet[2669]: I0527 17:46:20.086078 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d533f9d9-c1af-4ef5-96e1-ea5dde76205f-xtables-lock\") pod \"kube-proxy-7kgbm\" (UID: \"d533f9d9-c1af-4ef5-96e1-ea5dde76205f\") " pod="kube-system/kube-proxy-7kgbm" May 27 17:46:20.086287 kubelet[2669]: I0527 17:46:20.086100 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knq2\" (UniqueName: \"kubernetes.io/projected/d533f9d9-c1af-4ef5-96e1-ea5dde76205f-kube-api-access-2knq2\") pod \"kube-proxy-7kgbm\" (UID: \"d533f9d9-c1af-4ef5-96e1-ea5dde76205f\") " pod="kube-system/kube-proxy-7kgbm" May 27 17:46:20.225299 systemd[1]: Created slice kubepods-besteffort-podff318a34_6e5c_4f3e_b1de_3c19682d8c74.slice - libcontainer container kubepods-besteffort-podff318a34_6e5c_4f3e_b1de_3c19682d8c74.slice. 
May 27 17:46:20.287174 kubelet[2669]: I0527 17:46:20.287133 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hm4\" (UniqueName: \"kubernetes.io/projected/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-kube-api-access-b8hm4\") pod \"tigera-operator-7c5755cdcb-f52fk\" (UID: \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\") " pod="tigera-operator/tigera-operator-7c5755cdcb-f52fk" May 27 17:46:20.287569 kubelet[2669]: I0527 17:46:20.287199 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-f52fk\" (UID: \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\") " pod="tigera-operator/tigera-operator-7c5755cdcb-f52fk" May 27 17:46:20.365496 kubelet[2669]: E0527 17:46:20.365383 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:20.365832 containerd[1533]: time="2025-05-27T17:46:20.365799017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kgbm,Uid:d533f9d9-c1af-4ef5-96e1-ea5dde76205f,Namespace:kube-system,Attempt:0,}" May 27 17:46:20.387558 containerd[1533]: time="2025-05-27T17:46:20.387507619Z" level=info msg="connecting to shim e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f" address="unix:///run/containerd/s/c50510efc31d0bdd06375ec415dd0ac1bb136ac337a63a7e6f8d0f4e5b7a661e" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:20.419935 systemd[1]: Started cri-containerd-e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f.scope - libcontainer container e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f. 
May 27 17:46:20.447315 containerd[1533]: time="2025-05-27T17:46:20.447268452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kgbm,Uid:d533f9d9-c1af-4ef5-96e1-ea5dde76205f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f\"" May 27 17:46:20.447845 kubelet[2669]: E0527 17:46:20.447825 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:20.449521 containerd[1533]: time="2025-05-27T17:46:20.449483575Z" level=info msg="CreateContainer within sandbox \"e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:46:20.462858 containerd[1533]: time="2025-05-27T17:46:20.461700029Z" level=info msg="Container 1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:20.471034 containerd[1533]: time="2025-05-27T17:46:20.470985579Z" level=info msg="CreateContainer within sandbox \"e726d59d32ef4e2c57e5617ac59feeb03751b564bedfafdd523a13d96c8cf46f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a\"" May 27 17:46:20.471651 containerd[1533]: time="2025-05-27T17:46:20.471588867Z" level=info msg="StartContainer for \"1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a\"" May 27 17:46:20.473443 containerd[1533]: time="2025-05-27T17:46:20.473412293Z" level=info msg="connecting to shim 1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a" address="unix:///run/containerd/s/c50510efc31d0bdd06375ec415dd0ac1bb136ac337a63a7e6f8d0f4e5b7a661e" protocol=ttrpc version=3 May 27 17:46:20.494152 systemd[1]: Started cri-containerd-1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a.scope - libcontainer 
container 1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a. May 27 17:46:20.529373 containerd[1533]: time="2025-05-27T17:46:20.529273392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-f52fk,Uid:ff318a34-6e5c-4f3e-b1de-3c19682d8c74,Namespace:tigera-operator,Attempt:0,}" May 27 17:46:20.534315 containerd[1533]: time="2025-05-27T17:46:20.534281104Z" level=info msg="StartContainer for \"1e6dfb91e04cb6dc3f7f4986a8c67627d99b78d15e6ff2d11863706299d95c8a\" returns successfully" May 27 17:46:20.554464 containerd[1533]: time="2025-05-27T17:46:20.554254493Z" level=info msg="connecting to shim 72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd" address="unix:///run/containerd/s/00ae24a079c8c68308955cd5fb350523209a2aecd102250aa863f04b113357c0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:20.579974 systemd[1]: Started cri-containerd-72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd.scope - libcontainer container 72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd. 
May 27 17:46:20.623820 containerd[1533]: time="2025-05-27T17:46:20.623548885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-f52fk,Uid:ff318a34-6e5c-4f3e-b1de-3c19682d8c74,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\"" May 27 17:46:20.625802 containerd[1533]: time="2025-05-27T17:46:20.625725937Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 17:46:20.874479 kubelet[2669]: E0527 17:46:20.873760 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:20.883506 kubelet[2669]: I0527 17:46:20.883450 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7kgbm" podStartSLOduration=0.883430617 podStartE2EDuration="883.430617ms" podCreationTimestamp="2025-05-27 17:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:20.883245841 +0000 UTC m=+7.115546739" watchObservedRunningTime="2025-05-27 17:46:20.883430617 +0000 UTC m=+7.115731515" May 27 17:46:21.198745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304842315.mount: Deactivated successfully. May 27 17:46:22.138587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083557306.mount: Deactivated successfully. 
May 27 17:46:22.447106 containerd[1533]: time="2025-05-27T17:46:22.446974383Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:22.447678 containerd[1533]: time="2025-05-27T17:46:22.447636794Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 17:46:22.448977 containerd[1533]: time="2025-05-27T17:46:22.448940013Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:22.450769 containerd[1533]: time="2025-05-27T17:46:22.450735219Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:22.451323 containerd[1533]: time="2025-05-27T17:46:22.451295210Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.825544534s" May 27 17:46:22.451362 containerd[1533]: time="2025-05-27T17:46:22.451323459Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 17:46:22.453171 containerd[1533]: time="2025-05-27T17:46:22.453144608Z" level=info msg="CreateContainer within sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 17:46:22.459919 containerd[1533]: time="2025-05-27T17:46:22.459855758Z" level=info msg="Container 
40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:22.466685 containerd[1533]: time="2025-05-27T17:46:22.466641179Z" level=info msg="CreateContainer within sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\"" May 27 17:46:22.467123 containerd[1533]: time="2025-05-27T17:46:22.467099434Z" level=info msg="StartContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\"" May 27 17:46:22.468130 containerd[1533]: time="2025-05-27T17:46:22.468103376Z" level=info msg="connecting to shim 40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3" address="unix:///run/containerd/s/00ae24a079c8c68308955cd5fb350523209a2aecd102250aa863f04b113357c0" protocol=ttrpc version=3 May 27 17:46:22.512910 systemd[1]: Started cri-containerd-40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3.scope - libcontainer container 40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3. 
May 27 17:46:22.544994 containerd[1533]: time="2025-05-27T17:46:22.544957353Z" level=info msg="StartContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" returns successfully" May 27 17:46:22.711571 kubelet[2669]: E0527 17:46:22.711422 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:22.878638 kubelet[2669]: E0527 17:46:22.878548 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:23.998674 kubelet[2669]: E0527 17:46:23.998619 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:24.023703 kubelet[2669]: I0527 17:46:24.023617 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-f52fk" podStartSLOduration=2.196631893 podStartE2EDuration="4.023596312s" podCreationTimestamp="2025-05-27 17:46:20 +0000 UTC" firstStartedPulling="2025-05-27 17:46:20.625217468 +0000 UTC m=+6.857518366" lastFinishedPulling="2025-05-27 17:46:22.452181887 +0000 UTC m=+8.684482785" observedRunningTime="2025-05-27 17:46:22.895530407 +0000 UTC m=+9.127831305" watchObservedRunningTime="2025-05-27 17:46:24.023596312 +0000 UTC m=+10.255897211" May 27 17:46:24.883810 kubelet[2669]: E0527 17:46:24.883196 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:27.151335 kubelet[2669]: E0527 17:46:27.151212 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 27 17:46:28.073532 sudo[1752]: pam_unix(sudo:session): session closed for user root May 27 17:46:28.076877 sshd[1751]: Connection closed by 10.0.0.1 port 42488 May 27 17:46:28.078155 sshd-session[1749]: pam_unix(sshd:session): session closed for user core May 27 17:46:28.084855 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. May 27 17:46:28.087457 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:42488.service: Deactivated successfully. May 27 17:46:28.090497 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:46:28.091072 systemd[1]: session-7.scope: Consumed 4.510s CPU time, 223.3M memory peak. May 27 17:46:28.094660 systemd-logind[1504]: Removed session 7. May 27 17:46:30.957423 systemd[1]: Created slice kubepods-besteffort-pod8fcf9a5b_32f2_4eb5_96e8_f5b40b783bfb.slice - libcontainer container kubepods-besteffort-pod8fcf9a5b_32f2_4eb5_96e8_f5b40b783bfb.slice. May 27 17:46:31.054195 kubelet[2669]: I0527 17:46:31.054102 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb-tigera-ca-bundle\") pod \"calico-typha-6d56548d6d-wbr2m\" (UID: \"8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb\") " pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:46:31.054195 kubelet[2669]: I0527 17:46:31.054170 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgm2\" (UniqueName: \"kubernetes.io/projected/8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb-kube-api-access-gsgm2\") pod \"calico-typha-6d56548d6d-wbr2m\" (UID: \"8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb\") " pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:46:31.054195 kubelet[2669]: I0527 17:46:31.054212 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb-typha-certs\") pod \"calico-typha-6d56548d6d-wbr2m\" (UID: \"8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb\") " pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:46:31.262522 kubelet[2669]: E0527 17:46:31.262460 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:31.264030 containerd[1533]: time="2025-05-27T17:46:31.263613740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d56548d6d-wbr2m,Uid:8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb,Namespace:calico-system,Attempt:0,}" May 27 17:46:31.379822 systemd[1]: Created slice kubepods-besteffort-pod1290fdfb_b0ab_446e_a3a4_ace4bfb5ee07.slice - libcontainer container kubepods-besteffort-pod1290fdfb_b0ab_446e_a3a4_ace4bfb5ee07.slice. May 27 17:46:31.398013 containerd[1533]: time="2025-05-27T17:46:31.397956800Z" level=info msg="connecting to shim c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2" address="unix:///run/containerd/s/32ecb3a679ae46f194c6efe49ee1c9309124b44efa92457a7f2aeda4caba30b8" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:31.432116 systemd[1]: Started cri-containerd-c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2.scope - libcontainer container c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2. 
May 27 17:46:31.481687 containerd[1533]: time="2025-05-27T17:46:31.481639257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d56548d6d-wbr2m,Uid:8fcf9a5b-32f2-4eb5-96e8-f5b40b783bfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2\"" May 27 17:46:31.482520 kubelet[2669]: E0527 17:46:31.482482 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:31.483404 containerd[1533]: time="2025-05-27T17:46:31.483369440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 17:46:31.524765 kubelet[2669]: E0527 17:46:31.524594 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:31.558761 kubelet[2669]: I0527 17:46:31.558691 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-policysync\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.558761 kubelet[2669]: I0527 17:46:31.558742 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-cni-net-dir\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.558761 kubelet[2669]: I0527 17:46:31.558763 2669 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-lib-modules\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559001 kubelet[2669]: I0527 17:46:31.558817 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k47nw\" (UniqueName: \"kubernetes.io/projected/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-kube-api-access-k47nw\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559001 kubelet[2669]: I0527 17:46:31.558899 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-node-certs\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559001 kubelet[2669]: I0527 17:46:31.558938 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-var-run-calico\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559001 kubelet[2669]: I0527 17:46:31.558983 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-xtables-lock\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559102 kubelet[2669]: I0527 17:46:31.559028 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-flexvol-driver-host\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559131 kubelet[2669]: I0527 17:46:31.559113 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-cni-log-dir\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559230 kubelet[2669]: I0527 17:46:31.559193 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-cni-bin-dir\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559267 kubelet[2669]: I0527 17:46:31.559246 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-tigera-ca-bundle\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.559297 kubelet[2669]: I0527 17:46:31.559269 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07-var-lib-calico\") pod \"calico-node-68gs9\" (UID: \"1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07\") " pod="calico-system/calico-node-68gs9" May 27 17:46:31.660475 kubelet[2669]: I0527 17:46:31.660424 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/c1488e45-b4c4-4b5a-9c26-a912011cdd13-registration-dir\") pod \"csi-node-driver-lf5vj\" (UID: \"c1488e45-b4c4-4b5a-9c26-a912011cdd13\") " pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:31.660844 kubelet[2669]: I0527 17:46:31.660793 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j9xw\" (UniqueName: \"kubernetes.io/projected/c1488e45-b4c4-4b5a-9c26-a912011cdd13-kube-api-access-9j9xw\") pod \"csi-node-driver-lf5vj\" (UID: \"c1488e45-b4c4-4b5a-9c26-a912011cdd13\") " pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:31.660844 kubelet[2669]: I0527 17:46:31.660840 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c1488e45-b4c4-4b5a-9c26-a912011cdd13-varrun\") pod \"csi-node-driver-lf5vj\" (UID: \"c1488e45-b4c4-4b5a-9c26-a912011cdd13\") " pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:31.661056 kubelet[2669]: I0527 17:46:31.661023 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c1488e45-b4c4-4b5a-9c26-a912011cdd13-socket-dir\") pod \"csi-node-driver-lf5vj\" (UID: \"c1488e45-b4c4-4b5a-9c26-a912011cdd13\") " pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:31.661943 kubelet[2669]: E0527 17:46:31.661909 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.661943 kubelet[2669]: W0527 17:46:31.661931 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.661943 kubelet[2669]: E0527 17:46:31.661954 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.662270 kubelet[2669]: E0527 17:46:31.662213 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.662270 kubelet[2669]: W0527 17:46:31.662232 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.662270 kubelet[2669]: E0527 17:46:31.662242 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.662617 kubelet[2669]: E0527 17:46:31.662569 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.662617 kubelet[2669]: W0527 17:46:31.662583 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.662617 kubelet[2669]: E0527 17:46:31.662593 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.662871 kubelet[2669]: E0527 17:46:31.662840 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.662871 kubelet[2669]: W0527 17:46:31.662863 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.662961 kubelet[2669]: E0527 17:46:31.662881 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.662961 kubelet[2669]: I0527 17:46:31.662899 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1488e45-b4c4-4b5a-9c26-a912011cdd13-kubelet-dir\") pod \"csi-node-driver-lf5vj\" (UID: \"c1488e45-b4c4-4b5a-9c26-a912011cdd13\") " pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:31.663130 kubelet[2669]: E0527 17:46:31.663113 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.663130 kubelet[2669]: W0527 17:46:31.663127 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.663189 kubelet[2669]: E0527 17:46:31.663148 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.663447 kubelet[2669]: E0527 17:46:31.663422 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.663447 kubelet[2669]: W0527 17:46:31.663441 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.663447 kubelet[2669]: E0527 17:46:31.663451 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.663718 kubelet[2669]: E0527 17:46:31.663695 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.663752 kubelet[2669]: W0527 17:46:31.663718 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.663752 kubelet[2669]: E0527 17:46:31.663739 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.664869 kubelet[2669]: E0527 17:46:31.664850 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.664869 kubelet[2669]: W0527 17:46:31.664863 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.664869 kubelet[2669]: E0527 17:46:31.664872 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.676867 kubelet[2669]: E0527 17:46:31.676826 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.676867 kubelet[2669]: W0527 17:46:31.676856 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.677010 kubelet[2669]: E0527 17:46:31.676884 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.686470 containerd[1533]: time="2025-05-27T17:46:31.686433238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68gs9,Uid:1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07,Namespace:calico-system,Attempt:0,}" May 27 17:46:31.712331 containerd[1533]: time="2025-05-27T17:46:31.712286919Z" level=info msg="connecting to shim f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90" address="unix:///run/containerd/s/2a79a3d9c2263cac99f1f9617845bac70ac5738fa43da8419497c04df27060f4" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:31.742020 systemd[1]: Started cri-containerd-f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90.scope - libcontainer container f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90. May 27 17:46:31.764374 kubelet[2669]: E0527 17:46:31.764285 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.764374 kubelet[2669]: W0527 17:46:31.764313 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.764374 kubelet[2669]: E0527 17:46:31.764337 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.765192 kubelet[2669]: E0527 17:46:31.765039 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.765192 kubelet[2669]: W0527 17:46:31.765052 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.765192 kubelet[2669]: E0527 17:46:31.765070 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.765464 kubelet[2669]: E0527 17:46:31.765449 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.765544 kubelet[2669]: W0527 17:46:31.765530 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.765641 kubelet[2669]: E0527 17:46:31.765626 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.766742 kubelet[2669]: E0527 17:46:31.766722 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.767030 kubelet[2669]: W0527 17:46:31.766910 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.767030 kubelet[2669]: E0527 17:46:31.766932 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.767307 kubelet[2669]: E0527 17:46:31.767293 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.767454 kubelet[2669]: W0527 17:46:31.767387 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.767619 kubelet[2669]: E0527 17:46:31.767543 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.768024 kubelet[2669]: E0527 17:46:31.767954 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.768024 kubelet[2669]: W0527 17:46:31.767966 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.768125 kubelet[2669]: E0527 17:46:31.768040 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.768530 kubelet[2669]: E0527 17:46:31.768499 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.768530 kubelet[2669]: W0527 17:46:31.768515 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.768876 kubelet[2669]: E0527 17:46:31.768843 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.769827 kubelet[2669]: E0527 17:46:31.769741 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.769827 kubelet[2669]: W0527 17:46:31.769765 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.770041 kubelet[2669]: E0527 17:46:31.770011 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.770119 kubelet[2669]: E0527 17:46:31.770098 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.770119 kubelet[2669]: W0527 17:46:31.770115 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.770348 kubelet[2669]: E0527 17:46:31.770285 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.770402 kubelet[2669]: E0527 17:46:31.770379 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.770402 kubelet[2669]: W0527 17:46:31.770391 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.770543 kubelet[2669]: E0527 17:46:31.770499 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.770753 kubelet[2669]: E0527 17:46:31.770731 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.770753 kubelet[2669]: W0527 17:46:31.770751 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.770858 kubelet[2669]: E0527 17:46:31.770786 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.771216 kubelet[2669]: E0527 17:46:31.771194 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.771216 kubelet[2669]: W0527 17:46:31.771212 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.771385 kubelet[2669]: E0527 17:46:31.771339 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.771579 kubelet[2669]: E0527 17:46:31.771564 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.772889 kubelet[2669]: W0527 17:46:31.771638 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.772889 kubelet[2669]: E0527 17:46:31.771794 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.772889 kubelet[2669]: E0527 17:46:31.771997 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.772889 kubelet[2669]: W0527 17:46:31.772009 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.772889 kubelet[2669]: E0527 17:46:31.772073 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.772889 kubelet[2669]: E0527 17:46:31.772423 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.772889 kubelet[2669]: W0527 17:46:31.772434 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.772889 kubelet[2669]: E0527 17:46:31.772540 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.773124 kubelet[2669]: E0527 17:46:31.773047 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.773124 kubelet[2669]: W0527 17:46:31.773059 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.773182 kubelet[2669]: E0527 17:46:31.773174 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.773605 kubelet[2669]: E0527 17:46:31.773546 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.773605 kubelet[2669]: W0527 17:46:31.773566 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.774244 kubelet[2669]: E0527 17:46:31.773972 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.774244 kubelet[2669]: E0527 17:46:31.774076 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.774244 kubelet[2669]: W0527 17:46:31.774086 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.774244 kubelet[2669]: E0527 17:46:31.774159 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.774446 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.775884 kubelet[2669]: W0527 17:46:31.774457 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.774506 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.774688 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.775884 kubelet[2669]: W0527 17:46:31.774699 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.774718 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.775403 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.775884 kubelet[2669]: W0527 17:46:31.775415 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.775884 kubelet[2669]: E0527 17:46:31.775426 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.779005 kubelet[2669]: E0527 17:46:31.778976 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.779005 kubelet[2669]: W0527 17:46:31.779001 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.779160 kubelet[2669]: E0527 17:46:31.779041 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.779970 kubelet[2669]: E0527 17:46:31.779951 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.779970 kubelet[2669]: W0527 17:46:31.779965 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.782321 kubelet[2669]: E0527 17:46:31.779985 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.782321 kubelet[2669]: E0527 17:46:31.780245 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.782321 kubelet[2669]: W0527 17:46:31.780255 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.782321 kubelet[2669]: E0527 17:46:31.780266 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.782321 kubelet[2669]: E0527 17:46:31.781043 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.782321 kubelet[2669]: W0527 17:46:31.781070 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.782321 kubelet[2669]: E0527 17:46:31.781164 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:31.789142 kubelet[2669]: E0527 17:46:31.789081 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:31.789142 kubelet[2669]: W0527 17:46:31.789106 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:31.789142 kubelet[2669]: E0527 17:46:31.789124 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:31.797372 containerd[1533]: time="2025-05-27T17:46:31.797286714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68gs9,Uid:1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90\"" May 27 17:46:32.848285 kubelet[2669]: E0527 17:46:32.848228 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:32.959975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634685913.mount: Deactivated successfully. May 27 17:46:33.596884 update_engine[1508]: I20250527 17:46:33.596771 1508 update_attempter.cc:509] Updating boot flags... 
May 27 17:46:34.848468 kubelet[2669]: E0527 17:46:34.848406 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:35.594224 containerd[1533]: time="2025-05-27T17:46:35.594156812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:35.595488 containerd[1533]: time="2025-05-27T17:46:35.595441107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 17:46:35.596967 containerd[1533]: time="2025-05-27T17:46:35.596891571Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:35.598915 containerd[1533]: time="2025-05-27T17:46:35.598876254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:35.599531 containerd[1533]: time="2025-05-27T17:46:35.599494775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 4.116089736s" May 27 17:46:35.599531 containerd[1533]: time="2025-05-27T17:46:35.599525230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 17:46:35.601765 containerd[1533]: time="2025-05-27T17:46:35.601211872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 17:46:35.608208 containerd[1533]: time="2025-05-27T17:46:35.608165905Z" level=info msg="CreateContainer within sandbox \"c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 17:46:35.616238 containerd[1533]: time="2025-05-27T17:46:35.616193218Z" level=info msg="Container f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:35.623588 containerd[1533]: time="2025-05-27T17:46:35.623535387Z" level=info msg="CreateContainer within sandbox \"c8c2e129c253fe7d849eb7a2536901de92562230755c9366647c306abcdcddf2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341\"" May 27 17:46:35.624108 containerd[1533]: time="2025-05-27T17:46:35.624068625Z" level=info msg="StartContainer for \"f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341\"" May 27 17:46:35.625196 containerd[1533]: time="2025-05-27T17:46:35.625162560Z" level=info msg="connecting to shim f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341" address="unix:///run/containerd/s/32ecb3a679ae46f194c6efe49ee1c9309124b44efa92457a7f2aeda4caba30b8" protocol=ttrpc version=3 May 27 17:46:35.651994 systemd[1]: Started cri-containerd-f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341.scope - libcontainer container f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341. 
May 27 17:46:35.730297 containerd[1533]: time="2025-05-27T17:46:35.730188241Z" level=info msg="StartContainer for \"f9f3f0df201e33b4722fccf1edb12275f4a2c0a2058aac949899dc9655453341\" returns successfully" May 27 17:46:35.910905 kubelet[2669]: E0527 17:46:35.910472 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:35.929157 kubelet[2669]: I0527 17:46:35.929092 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d56548d6d-wbr2m" podStartSLOduration=1.8117687569999998 podStartE2EDuration="5.929069713s" podCreationTimestamp="2025-05-27 17:46:30 +0000 UTC" firstStartedPulling="2025-05-27 17:46:31.483066198 +0000 UTC m=+17.715367096" lastFinishedPulling="2025-05-27 17:46:35.600367164 +0000 UTC m=+21.832668052" observedRunningTime="2025-05-27 17:46:35.928901961 +0000 UTC m=+22.161202859" watchObservedRunningTime="2025-05-27 17:46:35.929069713 +0000 UTC m=+22.161370611" May 27 17:46:35.988423 kubelet[2669]: E0527 17:46:35.988311 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.988423 kubelet[2669]: W0527 17:46:35.988331 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.988423 kubelet[2669]: E0527 17:46:35.988352 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.988654 kubelet[2669]: E0527 17:46:35.988586 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.988654 kubelet[2669]: W0527 17:46:35.988594 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.988654 kubelet[2669]: E0527 17:46:35.988603 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.988845 kubelet[2669]: E0527 17:46:35.988820 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.988845 kubelet[2669]: W0527 17:46:35.988832 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.988845 kubelet[2669]: E0527 17:46:35.988842 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.989071 kubelet[2669]: E0527 17:46:35.989054 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.989071 kubelet[2669]: W0527 17:46:35.989066 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.989132 kubelet[2669]: E0527 17:46:35.989075 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.989264 kubelet[2669]: E0527 17:46:35.989248 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.989264 kubelet[2669]: W0527 17:46:35.989260 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.989264 kubelet[2669]: E0527 17:46:35.989268 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.989470 kubelet[2669]: E0527 17:46:35.989447 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.989470 kubelet[2669]: W0527 17:46:35.989468 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.989623 kubelet[2669]: E0527 17:46:35.989478 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.989687 kubelet[2669]: E0527 17:46:35.989658 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.989725 kubelet[2669]: W0527 17:46:35.989692 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.989725 kubelet[2669]: E0527 17:46:35.989701 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.989959 kubelet[2669]: E0527 17:46:35.989940 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.989959 kubelet[2669]: W0527 17:46:35.989953 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.989959 kubelet[2669]: E0527 17:46:35.989961 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.990176 kubelet[2669]: E0527 17:46:35.990149 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.990176 kubelet[2669]: W0527 17:46:35.990164 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.990176 kubelet[2669]: E0527 17:46:35.990172 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.990446 kubelet[2669]: E0527 17:46:35.990344 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.990446 kubelet[2669]: W0527 17:46:35.990367 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.990446 kubelet[2669]: E0527 17:46:35.990375 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.990550 kubelet[2669]: E0527 17:46:35.990541 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.990550 kubelet[2669]: W0527 17:46:35.990549 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.990631 kubelet[2669]: E0527 17:46:35.990558 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.990856 kubelet[2669]: E0527 17:46:35.990821 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.990856 kubelet[2669]: W0527 17:46:35.990846 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.991042 kubelet[2669]: E0527 17:46:35.990871 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.991208 kubelet[2669]: E0527 17:46:35.991186 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.991208 kubelet[2669]: W0527 17:46:35.991198 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.991208 kubelet[2669]: E0527 17:46:35.991206 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:35.991462 kubelet[2669]: E0527 17:46:35.991439 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.991462 kubelet[2669]: W0527 17:46:35.991452 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.991574 kubelet[2669]: E0527 17:46:35.991468 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:35.991655 kubelet[2669]: E0527 17:46:35.991637 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:35.991655 kubelet[2669]: W0527 17:46:35.991648 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:35.991655 kubelet[2669]: E0527 17:46:35.991656 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.001166 kubelet[2669]: E0527 17:46:36.001134 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.001166 kubelet[2669]: W0527 17:46:36.001153 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.001501 kubelet[2669]: E0527 17:46:36.001163 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.001501 kubelet[2669]: E0527 17:46:36.001442 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.001501 kubelet[2669]: W0527 17:46:36.001449 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.001501 kubelet[2669]: E0527 17:46:36.001464 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.001755 kubelet[2669]: E0527 17:46:36.001699 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.001755 kubelet[2669]: W0527 17:46:36.001723 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.001755 kubelet[2669]: E0527 17:46:36.001745 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.001985 kubelet[2669]: E0527 17:46:36.001959 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.001985 kubelet[2669]: W0527 17:46:36.001984 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.002111 kubelet[2669]: E0527 17:46:36.001999 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.002189 kubelet[2669]: E0527 17:46:36.002172 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.002189 kubelet[2669]: W0527 17:46:36.002183 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.002269 kubelet[2669]: E0527 17:46:36.002194 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.002425 kubelet[2669]: E0527 17:46:36.002402 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.002425 kubelet[2669]: W0527 17:46:36.002417 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.002425 kubelet[2669]: E0527 17:46:36.002430 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.002804 kubelet[2669]: E0527 17:46:36.002747 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.002804 kubelet[2669]: W0527 17:46:36.002765 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.002804 kubelet[2669]: E0527 17:46:36.002791 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.002986 kubelet[2669]: E0527 17:46:36.002961 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.002986 kubelet[2669]: W0527 17:46:36.002977 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.003051 kubelet[2669]: E0527 17:46:36.002994 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.003177 kubelet[2669]: E0527 17:46:36.003158 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.003177 kubelet[2669]: W0527 17:46:36.003170 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.003253 kubelet[2669]: E0527 17:46:36.003191 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.003387 kubelet[2669]: E0527 17:46:36.003371 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.003387 kubelet[2669]: W0527 17:46:36.003382 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.003471 kubelet[2669]: E0527 17:46:36.003401 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.003696 kubelet[2669]: E0527 17:46:36.003677 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.003696 kubelet[2669]: W0527 17:46:36.003693 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.003814 kubelet[2669]: E0527 17:46:36.003712 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.004943 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.008856 kubelet[2669]: W0527 17:46:36.008289 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008305 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008542 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.008856 kubelet[2669]: W0527 17:46:36.008550 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008566 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008699 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.008856 kubelet[2669]: W0527 17:46:36.008706 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008720 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.008856 kubelet[2669]: E0527 17:46:36.008866 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.009252 kubelet[2669]: W0527 17:46:36.008873 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.009252 kubelet[2669]: E0527 17:46:36.008885 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.009252 kubelet[2669]: E0527 17:46:36.009045 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.009252 kubelet[2669]: W0527 17:46:36.009056 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.009252 kubelet[2669]: E0527 17:46:36.009064 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.009425 kubelet[2669]: E0527 17:46:36.009385 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.009425 kubelet[2669]: W0527 17:46:36.009397 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.009425 kubelet[2669]: E0527 17:46:36.009407 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.009753 kubelet[2669]: E0527 17:46:36.009544 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.009753 kubelet[2669]: W0527 17:46:36.009561 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.009753 kubelet[2669]: E0527 17:46:36.009568 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.848837 kubelet[2669]: E0527 17:46:36.848745 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:36.911701 kubelet[2669]: I0527 17:46:36.911667 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:46:36.912312 kubelet[2669]: E0527 17:46:36.912038 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:36.938091 containerd[1533]: time="2025-05-27T17:46:36.938034214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:36.938872 containerd[1533]: time="2025-05-27T17:46:36.938841772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 17:46:36.940036 containerd[1533]: time="2025-05-27T17:46:36.939998126Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:36.941964 containerd[1533]: time="2025-05-27T17:46:36.941929810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:36.942423 containerd[1533]: time="2025-05-27T17:46:36.942386968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id 
\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.341119316s" May 27 17:46:36.942423 containerd[1533]: time="2025-05-27T17:46:36.942415780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 17:46:36.944223 containerd[1533]: time="2025-05-27T17:46:36.944190507Z" level=info msg="CreateContainer within sandbox \"f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 17:46:36.954223 containerd[1533]: time="2025-05-27T17:46:36.954153126Z" level=info msg="Container 110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:36.965019 containerd[1533]: time="2025-05-27T17:46:36.964978219Z" level=info msg="CreateContainer within sandbox \"f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\"" May 27 17:46:36.965497 containerd[1533]: time="2025-05-27T17:46:36.965474091Z" level=info msg="StartContainer for \"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\"" May 27 17:46:36.967141 containerd[1533]: time="2025-05-27T17:46:36.967085289Z" level=info msg="connecting to shim 110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552" address="unix:///run/containerd/s/2a79a3d9c2263cac99f1f9617845bac70ac5738fa43da8419497c04df27060f4" protocol=ttrpc version=3 May 27 17:46:36.996048 systemd[1]: Started 
cri-containerd-110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552.scope - libcontainer container 110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552. May 27 17:46:36.997026 kubelet[2669]: E0527 17:46:36.997002 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.997026 kubelet[2669]: W0527 17:46:36.997023 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.997151 kubelet[2669]: E0527 17:46:36.997042 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.997349 kubelet[2669]: E0527 17:46:36.997323 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.997349 kubelet[2669]: W0527 17:46:36.997338 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.997438 kubelet[2669]: E0527 17:46:36.997353 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.997546 kubelet[2669]: E0527 17:46:36.997520 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.997546 kubelet[2669]: W0527 17:46:36.997530 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.997546 kubelet[2669]: E0527 17:46:36.997540 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.997737 kubelet[2669]: E0527 17:46:36.997705 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.997737 kubelet[2669]: W0527 17:46:36.997717 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.997737 kubelet[2669]: E0527 17:46:36.997730 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.997941 kubelet[2669]: E0527 17:46:36.997910 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.997941 kubelet[2669]: W0527 17:46:36.997919 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.997941 kubelet[2669]: E0527 17:46:36.997929 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.998163 kubelet[2669]: E0527 17:46:36.998107 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.998163 kubelet[2669]: W0527 17:46:36.998116 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.998163 kubelet[2669]: E0527 17:46:36.998125 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.998331 kubelet[2669]: E0527 17:46:36.998289 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.998331 kubelet[2669]: W0527 17:46:36.998296 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.998331 kubelet[2669]: E0527 17:46:36.998312 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.998508 kubelet[2669]: E0527 17:46:36.998489 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.998508 kubelet[2669]: W0527 17:46:36.998504 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.998689 kubelet[2669]: E0527 17:46:36.998517 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.998742 kubelet[2669]: E0527 17:46:36.998706 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.998742 kubelet[2669]: W0527 17:46:36.998715 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.998742 kubelet[2669]: E0527 17:46:36.998725 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.998957 kubelet[2669]: E0527 17:46:36.998937 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.998957 kubelet[2669]: W0527 17:46:36.998951 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.999030 kubelet[2669]: E0527 17:46:36.998962 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.999172 kubelet[2669]: E0527 17:46:36.999152 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.999172 kubelet[2669]: W0527 17:46:36.999165 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.999258 kubelet[2669]: E0527 17:46:36.999176 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.999379 kubelet[2669]: E0527 17:46:36.999362 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.999379 kubelet[2669]: W0527 17:46:36.999375 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.999379 kubelet[2669]: E0527 17:46:36.999385 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:36.999573 kubelet[2669]: E0527 17:46:36.999563 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.999608 kubelet[2669]: W0527 17:46:36.999574 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:36.999647 kubelet[2669]: E0527 17:46:36.999584 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:36.999958 kubelet[2669]: E0527 17:46:36.999942 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:36.999958 kubelet[2669]: W0527 17:46:36.999955 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.000053 kubelet[2669]: E0527 17:46:36.999966 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.000197 kubelet[2669]: E0527 17:46:37.000159 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.000197 kubelet[2669]: W0527 17:46:37.000172 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.000197 kubelet[2669]: E0527 17:46:37.000182 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.011646 kubelet[2669]: E0527 17:46:37.011610 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.011646 kubelet[2669]: W0527 17:46:37.011634 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.011646 kubelet[2669]: E0527 17:46:37.011653 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.011929 kubelet[2669]: E0527 17:46:37.011902 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.011929 kubelet[2669]: W0527 17:46:37.011917 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.012017 kubelet[2669]: E0527 17:46:37.011948 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.012217 kubelet[2669]: E0527 17:46:37.012202 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.012217 kubelet[2669]: W0527 17:46:37.012215 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.012284 kubelet[2669]: E0527 17:46:37.012242 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.012500 kubelet[2669]: E0527 17:46:37.012485 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.012500 kubelet[2669]: W0527 17:46:37.012499 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.012571 kubelet[2669]: E0527 17:46:37.012514 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.012749 kubelet[2669]: E0527 17:46:37.012721 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.012820 kubelet[2669]: W0527 17:46:37.012750 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.012852 kubelet[2669]: E0527 17:46:37.012767 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.013058 kubelet[2669]: E0527 17:46:37.013043 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.013098 kubelet[2669]: W0527 17:46:37.013083 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.013201 kubelet[2669]: E0527 17:46:37.013176 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.013354 kubelet[2669]: E0527 17:46:37.013338 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.013496 kubelet[2669]: W0527 17:46:37.013354 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.013496 kubelet[2669]: E0527 17:46:37.013393 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.013583 kubelet[2669]: E0527 17:46:37.013567 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.013583 kubelet[2669]: W0527 17:46:37.013580 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.013675 kubelet[2669]: E0527 17:46:37.013659 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.013874 kubelet[2669]: E0527 17:46:37.013860 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.013918 kubelet[2669]: W0527 17:46:37.013875 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.013918 kubelet[2669]: E0527 17:46:37.013902 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.014275 kubelet[2669]: E0527 17:46:37.014260 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.014275 kubelet[2669]: W0527 17:46:37.014275 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.014329 kubelet[2669]: E0527 17:46:37.014290 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.014533 kubelet[2669]: E0527 17:46:37.014510 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.014533 kubelet[2669]: W0527 17:46:37.014524 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.014588 kubelet[2669]: E0527 17:46:37.014551 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.014824 kubelet[2669]: E0527 17:46:37.014808 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.014824 kubelet[2669]: W0527 17:46:37.014821 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.014886 kubelet[2669]: E0527 17:46:37.014838 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.015125 kubelet[2669]: E0527 17:46:37.015106 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.015125 kubelet[2669]: W0527 17:46:37.015120 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.015219 kubelet[2669]: E0527 17:46:37.015157 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.015328 kubelet[2669]: E0527 17:46:37.015312 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.015328 kubelet[2669]: W0527 17:46:37.015325 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.015413 kubelet[2669]: E0527 17:46:37.015343 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.015583 kubelet[2669]: E0527 17:46:37.015570 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.015641 kubelet[2669]: W0527 17:46:37.015583 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.015641 kubelet[2669]: E0527 17:46:37.015610 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.015890 kubelet[2669]: E0527 17:46:37.015866 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.015890 kubelet[2669]: W0527 17:46:37.015879 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.015992 kubelet[2669]: E0527 17:46:37.015975 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.016255 kubelet[2669]: E0527 17:46:37.016227 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.016255 kubelet[2669]: W0527 17:46:37.016240 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.016255 kubelet[2669]: E0527 17:46:37.016250 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 17:46:37.016473 kubelet[2669]: E0527 17:46:37.016455 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 17:46:37.016473 kubelet[2669]: W0527 17:46:37.016469 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 17:46:37.016553 kubelet[2669]: E0527 17:46:37.016479 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 17:46:37.040726 containerd[1533]: time="2025-05-27T17:46:37.040685305Z" level=info msg="StartContainer for \"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\" returns successfully" May 27 17:46:37.049721 systemd[1]: cri-containerd-110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552.scope: Deactivated successfully. May 27 17:46:37.051560 containerd[1533]: time="2025-05-27T17:46:37.051508830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\" id:\"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\" pid:3329 exited_at:{seconds:1748367997 nanos:51132370}" May 27 17:46:37.051560 containerd[1533]: time="2025-05-27T17:46:37.051509011Z" level=info msg="received exit event container_id:\"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\" id:\"110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552\" pid:3329 exited_at:{seconds:1748367997 nanos:51132370}" May 27 17:46:37.075739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110fd1d733a4f5962b0cc90bf073724961909911afb74a597e4e6a68d3ac6552-rootfs.mount: Deactivated successfully. 
May 27 17:46:37.916282 containerd[1533]: time="2025-05-27T17:46:37.916037352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 17:46:38.848387 kubelet[2669]: E0527 17:46:38.848306 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:40.847898 kubelet[2669]: E0527 17:46:40.847835 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:41.920414 containerd[1533]: time="2025-05-27T17:46:41.920358453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:41.921728 containerd[1533]: time="2025-05-27T17:46:41.921623390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 17:46:41.923527 containerd[1533]: time="2025-05-27T17:46:41.923474940Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:41.926334 containerd[1533]: time="2025-05-27T17:46:41.926292175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:41.926977 containerd[1533]: time="2025-05-27T17:46:41.926934744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" 
with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.010860625s" May 27 17:46:41.926977 containerd[1533]: time="2025-05-27T17:46:41.926962792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 17:46:41.929292 containerd[1533]: time="2025-05-27T17:46:41.929235451Z" level=info msg="CreateContainer within sandbox \"f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 17:46:42.012592 containerd[1533]: time="2025-05-27T17:46:42.012528427Z" level=info msg="Container a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:42.038171 containerd[1533]: time="2025-05-27T17:46:42.038105484Z" level=info msg="CreateContainer within sandbox \"f6d6314702107177562b6e6b8806f7e312e65f8b8256b7c51388c6d346ef1e90\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\"" May 27 17:46:42.038546 containerd[1533]: time="2025-05-27T17:46:42.038517169Z" level=info msg="StartContainer for \"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\"" May 27 17:46:42.039925 containerd[1533]: time="2025-05-27T17:46:42.039902835Z" level=info msg="connecting to shim a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d" address="unix:///run/containerd/s/2a79a3d9c2263cac99f1f9617845bac70ac5738fa43da8419497c04df27060f4" protocol=ttrpc version=3 May 27 17:46:42.068016 systemd[1]: Started cri-containerd-a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d.scope - libcontainer container 
a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d. May 27 17:46:42.110614 containerd[1533]: time="2025-05-27T17:46:42.110555097Z" level=info msg="StartContainer for \"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\" returns successfully" May 27 17:46:42.848165 kubelet[2669]: E0527 17:46:42.848072 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:43.390536 systemd[1]: cri-containerd-a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d.scope: Deactivated successfully. May 27 17:46:43.391033 systemd[1]: cri-containerd-a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d.scope: Consumed 595ms CPU time, 178.3M memory peak, 3.2M read from disk, 170.9M written to disk. May 27 17:46:43.391535 containerd[1533]: time="2025-05-27T17:46:43.391439997Z" level=info msg="received exit event container_id:\"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\" id:\"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\" pid:3415 exited_at:{seconds:1748368003 nanos:391203358}" May 27 17:46:43.391535 containerd[1533]: time="2025-05-27T17:46:43.391505452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\" id:\"a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d\" pid:3415 exited_at:{seconds:1748368003 nanos:391203358}" May 27 17:46:43.414953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a246e8ce56ecd5b0ba1b0f50500a1f1fff5615cdc25fe3cf624a79f9fbe75d4d-rootfs.mount: Deactivated successfully. 
May 27 17:46:43.469264 kubelet[2669]: I0527 17:46:43.469190 2669 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 27 17:46:43.869001 systemd[1]: Created slice kubepods-besteffort-pod8d78faec_b5d7_41f8_8249_0e8e52b5d1c7.slice - libcontainer container kubepods-besteffort-pod8d78faec_b5d7_41f8_8249_0e8e52b5d1c7.slice. May 27 17:46:43.874511 systemd[1]: Created slice kubepods-burstable-podc5c48ec0_a2ac_4764_bfa4_f9c5138bf260.slice - libcontainer container kubepods-burstable-podc5c48ec0_a2ac_4764_bfa4_f9c5138bf260.slice. May 27 17:46:43.880952 systemd[1]: Created slice kubepods-burstable-pod0520455b_5dee_4789_be5a_7de7b54d80f7.slice - libcontainer container kubepods-burstable-pod0520455b_5dee_4789_be5a_7de7b54d80f7.slice. May 27 17:46:43.887491 systemd[1]: Created slice kubepods-besteffort-pod1492d653_d5be_4d5d_a8b8_83419919ce71.slice - libcontainer container kubepods-besteffort-pod1492d653_d5be_4d5d_a8b8_83419919ce71.slice. May 27 17:46:43.892889 systemd[1]: Created slice kubepods-besteffort-podc240021b_2352_4222_8f50_18cae2a0375b.slice - libcontainer container kubepods-besteffort-podc240021b_2352_4222_8f50_18cae2a0375b.slice. May 27 17:46:43.899028 systemd[1]: Created slice kubepods-besteffort-pod9e2cee50_d728_4ff4_b38e_209d5e558f27.slice - libcontainer container kubepods-besteffort-pod9e2cee50_d728_4ff4_b38e_209d5e558f27.slice. May 27 17:46:43.906658 systemd[1]: Created slice kubepods-besteffort-pod574ead5f_32c9_4c0b_bef7_1affef3c0fad.slice - libcontainer container kubepods-besteffort-pod574ead5f_32c9_4c0b_bef7_1affef3c0fad.slice. 
May 27 17:46:43.934954 containerd[1533]: time="2025-05-27T17:46:43.934907215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:46:43.955054 kubelet[2669]: I0527 17:46:43.955002 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e2cee50-d728-4ff4-b38e-209d5e558f27-calico-apiserver-certs\") pod \"calico-apiserver-647ff8d844-6cwjz\" (UID: \"9e2cee50-d728-4ff4-b38e-209d5e558f27\") " pod="calico-apiserver/calico-apiserver-647ff8d844-6cwjz" May 27 17:46:43.955054 kubelet[2669]: I0527 17:46:43.955038 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfh9\" (UniqueName: \"kubernetes.io/projected/9e2cee50-d728-4ff4-b38e-209d5e558f27-kube-api-access-dmfh9\") pod \"calico-apiserver-647ff8d844-6cwjz\" (UID: \"9e2cee50-d728-4ff4-b38e-209d5e558f27\") " pod="calico-apiserver/calico-apiserver-647ff8d844-6cwjz" May 27 17:46:43.955054 kubelet[2669]: I0527 17:46:43.955054 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-pm2wx\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:43.955054 kubelet[2669]: I0527 17:46:43.955068 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb7p\" (UniqueName: \"kubernetes.io/projected/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-kube-api-access-7rb7p\") pod \"goldmane-8f77d7b6c-pm2wx\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:43.955668 kubelet[2669]: I0527 17:46:43.955087 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-ca-bundle\") pod \"whisker-866668bb89-mvv6p\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:43.955668 kubelet[2669]: I0527 17:46:43.955100 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-pm2wx\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:43.955668 kubelet[2669]: I0527 17:46:43.955112 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c48ec0-a2ac-4764-bfa4-f9c5138bf260-config-volume\") pod \"coredns-7c65d6cfc9-rkzrw\" (UID: \"c5c48ec0-a2ac-4764-bfa4-f9c5138bf260\") " pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:43.955668 kubelet[2669]: I0527 17:46:43.955129 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcp2h\" (UniqueName: \"kubernetes.io/projected/574ead5f-32c9-4c0b-bef7-1affef3c0fad-kube-api-access-pcp2h\") pod \"calico-kube-controllers-849878876c-q75fc\" (UID: \"574ead5f-32c9-4c0b-bef7-1affef3c0fad\") " pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:43.955668 kubelet[2669]: I0527 17:46:43.955149 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5x7b\" (UniqueName: \"kubernetes.io/projected/c5c48ec0-a2ac-4764-bfa4-f9c5138bf260-kube-api-access-h5x7b\") pod \"coredns-7c65d6cfc9-rkzrw\" (UID: \"c5c48ec0-a2ac-4764-bfa4-f9c5138bf260\") " pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:43.955867 kubelet[2669]: I0527 17:46:43.955175 2669 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkrv\" (UniqueName: \"kubernetes.io/projected/c240021b-2352-4222-8f50-18cae2a0375b-kube-api-access-pwkrv\") pod \"calico-apiserver-647ff8d844-tznbj\" (UID: \"c240021b-2352-4222-8f50-18cae2a0375b\") " pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" May 27 17:46:43.955867 kubelet[2669]: I0527 17:46:43.955193 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/574ead5f-32c9-4c0b-bef7-1affef3c0fad-tigera-ca-bundle\") pod \"calico-kube-controllers-849878876c-q75fc\" (UID: \"574ead5f-32c9-4c0b-bef7-1affef3c0fad\") " pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:43.955867 kubelet[2669]: I0527 17:46:43.955207 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps76r\" (UniqueName: \"kubernetes.io/projected/1492d653-d5be-4d5d-a8b8-83419919ce71-kube-api-access-ps76r\") pod \"whisker-866668bb89-mvv6p\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:43.955867 kubelet[2669]: I0527 17:46:43.955223 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-config\") pod \"goldmane-8f77d7b6c-pm2wx\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:43.955867 kubelet[2669]: I0527 17:46:43.955258 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0520455b-5dee-4789-be5a-7de7b54d80f7-config-volume\") pod \"coredns-7c65d6cfc9-glbgz\" (UID: \"0520455b-5dee-4789-be5a-7de7b54d80f7\") " 
pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:43.956038 kubelet[2669]: I0527 17:46:43.955290 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c240021b-2352-4222-8f50-18cae2a0375b-calico-apiserver-certs\") pod \"calico-apiserver-647ff8d844-tznbj\" (UID: \"c240021b-2352-4222-8f50-18cae2a0375b\") " pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" May 27 17:46:43.956038 kubelet[2669]: I0527 17:46:43.955308 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-backend-key-pair\") pod \"whisker-866668bb89-mvv6p\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:43.956038 kubelet[2669]: I0527 17:46:43.955325 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8bw\" (UniqueName: \"kubernetes.io/projected/0520455b-5dee-4789-be5a-7de7b54d80f7-kube-api-access-mv8bw\") pod \"coredns-7c65d6cfc9-glbgz\" (UID: \"0520455b-5dee-4789-be5a-7de7b54d80f7\") " pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:43.971431 kubelet[2669]: I0527 17:46:43.971387 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:46:43.971431 kubelet[2669]: I0527 17:46:43.971435 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:46:43.974279 kubelet[2669]: I0527 17:46:43.974235 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:46:43.988108 kubelet[2669]: I0527 17:46:43.988064 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:46:43.988237 kubelet[2669]: I0527 17:46:43.988168 2669 eviction_manager.go:398] "Eviction 
manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-647ff8d844-6cwjz","calico-apiserver/calico-apiserver-647ff8d844-tznbj","calico-system/whisker-866668bb89-mvv6p","calico-system/goldmane-8f77d7b6c-pm2wx","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-rkzrw","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","tigera-operator/tigera-operator-7c5755cdcb-f52fk","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:46:43.988330 kubelet[2669]: E0527 17:46:43.988307 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[calico-apiserver-certs kube-api-access-dmfh9], unattached volumes=[], failed to process volumes=[]: context canceled" pod="calico-apiserver/calico-apiserver-647ff8d844-6cwjz" podUID="9e2cee50-d728-4ff4-b38e-209d5e558f27" May 27 17:46:44.173159 containerd[1533]: time="2025-05-27T17:46:44.172972604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pm2wx,Uid:8d78faec-b5d7-41f8-8249-0e8e52b5d1c7,Namespace:calico-system,Attempt:0,}" May 27 17:46:44.178606 kubelet[2669]: E0527 17:46:44.178430 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:44.179081 containerd[1533]: time="2025-05-27T17:46:44.179016573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}" May 27 17:46:44.183494 kubelet[2669]: E0527 17:46:44.183424 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 27 17:46:44.184181 containerd[1533]: time="2025-05-27T17:46:44.184127469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:46:44.191263 containerd[1533]: time="2025-05-27T17:46:44.191210316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866668bb89-mvv6p,Uid:1492d653-d5be-4d5d-a8b8-83419919ce71,Namespace:calico-system,Attempt:0,}" May 27 17:46:44.197811 containerd[1533]: time="2025-05-27T17:46:44.197611120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647ff8d844-tznbj,Uid:c240021b-2352-4222-8f50-18cae2a0375b,Namespace:calico-apiserver,Attempt:0,}" May 27 17:46:44.211347 containerd[1533]: time="2025-05-27T17:46:44.211308551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}" May 27 17:46:44.305801 containerd[1533]: time="2025-05-27T17:46:44.305430055Z" level=error msg="Failed to destroy network for sandbox \"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.311691 containerd[1533]: time="2025-05-27T17:46:44.311642759Z" level=error msg="Failed to destroy network for sandbox \"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.314956 containerd[1533]: time="2025-05-27T17:46:44.314921112Z" level=error msg="Failed to destroy network for sandbox \"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.318155 containerd[1533]: time="2025-05-27T17:46:44.318120341Z" level=error msg="Failed to destroy network for sandbox \"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.318634 containerd[1533]: time="2025-05-27T17:46:44.318586331Z" level=error msg="Failed to destroy network for sandbox \"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.319268 containerd[1533]: time="2025-05-27T17:46:44.319238625Z" level=error msg="Failed to destroy network for sandbox \"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.384501 containerd[1533]: time="2025-05-27T17:46:44.384427297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866668bb89-mvv6p,Uid:1492d653-d5be-4d5d-a8b8-83419919ce71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.389060 containerd[1533]: 
time="2025-05-27T17:46:44.388973340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-647ff8d844-tznbj,Uid:c240021b-2352-4222-8f50-18cae2a0375b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.402514 kubelet[2669]: E0527 17:46:44.402406 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.402730 kubelet[2669]: E0527 17:46:44.402545 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" May 27 17:46:44.402730 kubelet[2669]: E0527 17:46:44.402420 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.402730 kubelet[2669]: E0527 17:46:44.402677 
2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:44.402730 kubelet[2669]: E0527 17:46:44.402589 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" May 27 17:46:44.402910 kubelet[2669]: E0527 17:46:44.402703 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:44.403712 kubelet[2669]: E0527 17:46:44.403634 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-647ff8d844-tznbj_calico-apiserver(c240021b-2352-4222-8f50-18cae2a0375b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-647ff8d844-tznbj_calico-apiserver(c240021b-2352-4222-8f50-18cae2a0375b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86249ecc219873d3e131d8785e3631747e12d0e181bba23c65ba88db817093b4\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" podUID="c240021b-2352-4222-8f50-18cae2a0375b" May 27 17:46:44.403943 kubelet[2669]: E0527 17:46:44.403883 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-866668bb89-mvv6p_calico-system(1492d653-d5be-4d5d-a8b8-83419919ce71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-866668bb89-mvv6p_calico-system(1492d653-d5be-4d5d-a8b8-83419919ce71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b0d65a8e6febd466f80f0a28ae89c7961802c597ff27b978e5d2df10a574488\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-866668bb89-mvv6p" podUID="1492d653-d5be-4d5d-a8b8-83419919ce71" May 27 17:46:44.449182 containerd[1533]: time="2025-05-27T17:46:44.448866416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.449651 kubelet[2669]: E0527 17:46:44.449372 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.449651 kubelet[2669]: E0527 17:46:44.449438 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:44.449651 kubelet[2669]: E0527 17:46:44.449462 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:44.449755 kubelet[2669]: E0527 17:46:44.449517 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b34b29c62031da5f2eda1e14c3868d3f219008934df6f7b329516783d3cba686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260" May 27 17:46:44.467673 containerd[1533]: time="2025-05-27T17:46:44.467519274Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.468029 kubelet[2669]: E0527 17:46:44.467962 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.468238 kubelet[2669]: E0527 17:46:44.468045 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:44.468238 kubelet[2669]: E0527 17:46:44.468077 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:44.468238 kubelet[2669]: E0527 
17:46:44.468140 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fb3e186bb06dab4a8667c9e96bab73a898a01dca8011f1b10f44deaf13ace24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad" May 27 17:46:44.469417 containerd[1533]: time="2025-05-27T17:46:44.469349323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-pm2wx,Uid:8d78faec-b5d7-41f8-8249-0e8e52b5d1c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.469804 kubelet[2669]: E0527 17:46:44.469754 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.469873 kubelet[2669]: E0527 17:46:44.469828 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:44.469873 kubelet[2669]: E0527 17:46:44.469854 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:44.469961 kubelet[2669]: E0527 17:46:44.469886 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-pm2wx_calico-system(8d78faec-b5d7-41f8-8249-0e8e52b5d1c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-pm2wx_calico-system(8d78faec-b5d7-41f8-8249-0e8e52b5d1c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aacfb82b73cda07e274d8c3af75e8153c43cf5a973c0b0cb001012181b49f11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-pm2wx" podUID="8d78faec-b5d7-41f8-8249-0e8e52b5d1c7" May 27 17:46:44.473616 containerd[1533]: time="2025-05-27T17:46:44.473382750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.474116 kubelet[2669]: E0527 17:46:44.474060 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.474184 kubelet[2669]: E0527 17:46:44.474132 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:44.474184 kubelet[2669]: E0527 17:46:44.474160 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:44.474252 kubelet[2669]: E0527 17:46:44.474208 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"069c4b62f10bc88268cc0e3585cc8a49fd882671dc0a86beac2aad8c144bb8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7" May 27 17:46:44.856147 systemd[1]: Created slice kubepods-besteffort-podc1488e45_b4c4_4b5a_9c26_a912011cdd13.slice - libcontainer container kubepods-besteffort-podc1488e45_b4c4_4b5a_9c26_a912011cdd13.slice. May 27 17:46:44.858850 containerd[1533]: time="2025-05-27T17:46:44.858801700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:46:44.912405 containerd[1533]: time="2025-05-27T17:46:44.912338064Z" level=error msg="Failed to destroy network for sandbox \"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.914671 systemd[1]: run-netns-cni\x2d58622288\x2d75d8\x2d7996\x2de064\x2d3f15b375f469.mount: Deactivated successfully. 
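Every sandbox failure above shares one root cause: the CNI plugin cannot stat /var/lib/calico/nodename, which calico-node writes only once it is running. A quick way to confirm it is one root cause rather than many is to extract the distinct failing pods from the kubelet entries. A minimal triage sketch (the regex fields match the `pod="…" podUID="…"` format in the lines above; where the log text comes from, e.g. a saved `journalctl -u kubelet` capture, is an assumption):

```python
import re

# Pulls the pod reference out of kubelet "Error syncing pod" entries like the
# ones above, and counts how many lines share the Calico nodename root cause.
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')
CAUSE_RE = re.compile(r"stat /var/lib/calico/nodename: no such file or directory")

def triage(log_text: str):
    """Return ({pod: podUID}, count of calico-nodename error lines)."""
    failing = {}
    cni_errors = 0
    for line in log_text.splitlines():
        if CAUSE_RE.search(line):
            cni_errors += 1
        match = POD_RE.search(line)
        if match:
            failing[match.group("pod")] = match.group("uid")
    return failing, cni_errors
```

Run against this journal it would report one repeated CNI error per pod, pointing at calico-node itself rather than at the individual workloads.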
May 27 17:46:44.931071 containerd[1533]: time="2025-05-27T17:46:44.931002065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.931317 kubelet[2669]: E0527 17:46:44.931272 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:44.931391 kubelet[2669]: E0527 17:46:44.931335 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:44.931391 kubelet[2669]: E0527 17:46:44.931356 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" 
May 27 17:46:44.931464 kubelet[2669]: E0527 17:46:44.931410 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74f462e5be92973122552c37add6ac949ed1ec3cab5b61e933758fb0fba6c30e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:44.939759 kubelet[2669]: I0527 17:46:44.939727 2669 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-647ff8d844-6cwjz" May 27 17:46:44.939759 kubelet[2669]: I0527 17:46:44.939754 2669 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-647ff8d844-6cwjz"] May 27 17:46:44.963317 kubelet[2669]: I0527 17:46:44.963261 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e2cee50-d728-4ff4-b38e-209d5e558f27-calico-apiserver-certs\") pod \"9e2cee50-d728-4ff4-b38e-209d5e558f27\" (UID: \"9e2cee50-d728-4ff4-b38e-209d5e558f27\") " May 27 17:46:44.963317 kubelet[2669]: I0527 17:46:44.963301 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfh9\" (UniqueName: \"kubernetes.io/projected/9e2cee50-d728-4ff4-b38e-209d5e558f27-kube-api-access-dmfh9\") pod \"9e2cee50-d728-4ff4-b38e-209d5e558f27\" (UID: \"9e2cee50-d728-4ff4-b38e-209d5e558f27\") " May 27 17:46:44.967000 kubelet[2669]: I0527 17:46:44.966967 2669 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2cee50-d728-4ff4-b38e-209d5e558f27-kube-api-access-dmfh9" (OuterVolumeSpecName: "kube-api-access-dmfh9") pod "9e2cee50-d728-4ff4-b38e-209d5e558f27" (UID: "9e2cee50-d728-4ff4-b38e-209d5e558f27"). InnerVolumeSpecName "kube-api-access-dmfh9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:46:44.967732 kubelet[2669]: I0527 17:46:44.967687 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2cee50-d728-4ff4-b38e-209d5e558f27-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9e2cee50-d728-4ff4-b38e-209d5e558f27" (UID: "9e2cee50-d728-4ff4-b38e-209d5e558f27"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 17:46:44.968454 systemd[1]: var-lib-kubelet-pods-9e2cee50\x2dd728\x2d4ff4\x2db38e\x2d209d5e558f27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmfh9.mount: Deactivated successfully. May 27 17:46:44.968572 systemd[1]: var-lib-kubelet-pods-9e2cee50\x2dd728\x2d4ff4\x2db38e\x2d209d5e558f27-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
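The mount units being deactivated above (e.g. `var-lib-kubelet-pods-9e2cee50\x2dd728…`) use systemd's path escaping: `-` separates path components and literal bytes such as `-` or `~` are written as `\xNN` hex escapes. A simplified decoder sketch, useful for reading these unit names back as filesystem paths (the authoritative tool is `systemd-escape --unescape --path`; this sketch ignores systemd's special-casing of leading dots and the root path):

```python
def systemd_unescape_path(unit: str) -> str:
    """Decode a systemd path-escaped unit name back to a filesystem path.

    '-' separates path components; literal characters outside the safe set
    are encoded as \\xNN hex escapes (so \\x2d is '-', \\x7e is '~').
    """
    decoded = []
    for part in unit.split("-"):
        out, i = [], 0
        while i < len(part):
            if part[i] == "\\" and part[i + 1 : i + 2] == "x":
                out.append(chr(int(part[i + 2 : i + 4], 16)))  # decode \xNN
                i += 4
            else:
                out.append(part[i])
                i += 1
        decoded.append("".join(out))
    return "/" + "/".join(decoded)
```

For example, the netns unit earlier in the log, `run-netns-cni\x2d58622288…`, decodes to `/run/netns/cni-58622288…`.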
May 27 17:46:45.063890 kubelet[2669]: I0527 17:46:45.063839 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e2cee50-d728-4ff4-b38e-209d5e558f27-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 27 17:46:45.063890 kubelet[2669]: I0527 17:46:45.063873 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfh9\" (UniqueName: \"kubernetes.io/projected/9e2cee50-d728-4ff4-b38e-209d5e558f27-kube-api-access-dmfh9\") on node \"localhost\" DevicePath \"\"" May 27 17:46:45.868254 systemd[1]: Removed slice kubepods-besteffort-pod9e2cee50_d728_4ff4_b38e_209d5e558f27.slice - libcontainer container kubepods-besteffort-pod9e2cee50_d728_4ff4_b38e_209d5e558f27.slice. May 27 17:46:46.940812 kubelet[2669]: I0527 17:46:46.940764 2669 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-647ff8d844-6cwjz"] May 27 17:46:46.960913 kubelet[2669]: I0527 17:46:46.960840 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:46:46.961754 kubelet[2669]: E0527 17:46:46.961714 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:46.968700 kubelet[2669]: I0527 17:46:46.968669 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:46:46.968700 kubelet[2669]: I0527 17:46:46.968697 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:46:46.971448 kubelet[2669]: I0527 17:46:46.971419 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:46:47.023680 kubelet[2669]: I0527 17:46:47.023643 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:46:47.024005 kubelet[2669]: I0527 
17:46:47.023972 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-8f77d7b6c-pm2wx","calico-apiserver/calico-apiserver-647ff8d844-tznbj","calico-system/whisker-866668bb89-mvv6p","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/csi-node-driver-lf5vj","calico-system/calico-node-68gs9","tigera-operator/tigera-operator-7c5755cdcb-f52fk","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:46:47.033910 kubelet[2669]: I0527 17:46:47.033879 2669 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-8f77d7b6c-pm2wx" May 27 17:46:47.034141 kubelet[2669]: I0527 17:46:47.034091 2669 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-8f77d7b6c-pm2wx"] May 27 17:46:47.075603 kubelet[2669]: I0527 17:46:47.075500 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-key-pair\") pod \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " May 27 17:46:47.076800 kubelet[2669]: I0527 17:46:47.076756 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-config\") pod \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " May 27 17:46:47.077976 kubelet[2669]: I0527 17:46:47.077936 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rb7p\" (UniqueName: 
\"kubernetes.io/projected/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-kube-api-access-7rb7p\") pod \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " May 27 17:46:47.078366 kubelet[2669]: I0527 17:46:47.078252 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-ca-bundle\") pod \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\" (UID: \"8d78faec-b5d7-41f8-8249-0e8e52b5d1c7\") " May 27 17:46:47.078366 kubelet[2669]: I0527 17:46:47.077578 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-config" (OuterVolumeSpecName: "config") pod "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7" (UID: "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 17:46:47.078366 kubelet[2669]: I0527 17:46:47.078350 2669 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-config\") on node \"localhost\" DevicePath \"\"" May 27 17:46:47.078666 kubelet[2669]: I0527 17:46:47.078636 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7" (UID: "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7"). InnerVolumeSpecName "goldmane-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 17:46:47.083076 kubelet[2669]: I0527 17:46:47.080151 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7" (UID: "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 17:46:47.082314 systemd[1]: var-lib-kubelet-pods-8d78faec\x2db5d7\x2d41f8\x2d8249\x2d0e8e52b5d1c7-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. May 27 17:46:47.085514 kubelet[2669]: I0527 17:46:47.085473 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-kube-api-access-7rb7p" (OuterVolumeSpecName: "kube-api-access-7rb7p") pod "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7" (UID: "8d78faec-b5d7-41f8-8249-0e8e52b5d1c7"). InnerVolumeSpecName "kube-api-access-7rb7p". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:46:47.087750 systemd[1]: var-lib-kubelet-pods-8d78faec\x2db5d7\x2d41f8\x2d8249\x2d0e8e52b5d1c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rb7p.mount: Deactivated successfully. 
May 27 17:46:47.179154 kubelet[2669]: I0527 17:46:47.179103 2669 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 17:46:47.179154 kubelet[2669]: I0527 17:46:47.179137 2669 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-goldmane-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 17:46:47.179154 kubelet[2669]: I0527 17:46:47.179146 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rb7p\" (UniqueName: \"kubernetes.io/projected/8d78faec-b5d7-41f8-8249-0e8e52b5d1c7-kube-api-access-7rb7p\") on node \"localhost\" DevicePath \"\"" May 27 17:46:47.855493 systemd[1]: Removed slice kubepods-besteffort-pod8d78faec_b5d7_41f8_8249_0e8e52b5d1c7.slice - libcontainer container kubepods-besteffort-pod8d78faec_b5d7_41f8_8249_0e8e52b5d1c7.slice. 
May 27 17:46:47.940887 kubelet[2669]: E0527 17:46:47.940674 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:48.034476 kubelet[2669]: I0527 17:46:48.034414 2669 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-8f77d7b6c-pm2wx"] May 27 17:46:48.044968 kubelet[2669]: I0527 17:46:48.044924 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:46:48.044968 kubelet[2669]: I0527 17:46:48.044959 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:46:48.047011 kubelet[2669]: I0527 17:46:48.046981 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:46:48.057007 kubelet[2669]: I0527 17:46:48.056975 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:46:48.057108 kubelet[2669]: I0527 17:46:48.057051 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-866668bb89-mvv6p","calico-apiserver/calico-apiserver-647ff8d844-tznbj","kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","tigera-operator/tigera-operator-7c5755cdcb-f52fk","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:46:48.060850 kubelet[2669]: I0527 17:46:48.060821 2669 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-866668bb89-mvv6p" May 27 17:46:48.060850 kubelet[2669]: I0527 17:46:48.060841 2669 eviction_manager.go:208] "Eviction 
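The recurring "Nameserver limits exceeded" warnings above come from the kubelet capping resolv.conf at three nameservers (mirroring glibc's historical MAXNS limit); this node evidently has four configured, so the kubelet keeps the first three: 1.1.1.1, 1.0.0.1, 8.8.8.8. A minimal sketch of that truncation (the limit of 3 matches the kubelet's behavior; the example resolv.conf content, including the fourth server, is an assumption):

```python
MAX_DNS_NAMESERVERS = 3  # the kubelet's per-pod limit, mirroring glibc's MAXNS

def applied_nameservers(resolv_conf: str):
    """Return (nameservers the kubelet would apply, whether any were dropped)."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_DNS_NAMESERVERS], len(servers) > MAX_DNS_NAMESERVERS
```

This warning is benign relative to the rest of the log, but it will repeat on every pod sync until the node's resolv.conf is trimmed to three servers.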
manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-866668bb89-mvv6p"] May 27 17:46:48.073717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254625504.mount: Deactivated successfully. May 27 17:46:48.076733 containerd[1533]: time="2025-05-27T17:46:48.076653736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:46:48.085808 kubelet[2669]: I0527 17:46:48.084276 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-backend-key-pair\") pod \"1492d653-d5be-4d5d-a8b8-83419919ce71\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " May 27 17:46:48.085808 kubelet[2669]: I0527 17:46:48.084311 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-ca-bundle\") pod \"1492d653-d5be-4d5d-a8b8-83419919ce71\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " May 27 17:46:48.085808 kubelet[2669]: I0527 17:46:48.084332 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps76r\" (UniqueName: \"kubernetes.io/projected/1492d653-d5be-4d5d-a8b8-83419919ce71-kube-api-access-ps76r\") pod \"1492d653-d5be-4d5d-a8b8-83419919ce71\" (UID: \"1492d653-d5be-4d5d-a8b8-83419919ce71\") " May 27 17:46:48.087859 kubelet[2669]: I0527 17:46:48.087246 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1492d653-d5be-4d5d-a8b8-83419919ce71" (UID: "1492d653-d5be-4d5d-a8b8-83419919ce71"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 17:46:48.087939 containerd[1533]: time="2025-05-27T17:46:48.087352462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount254625504: write /var/lib/containerd/tmpmounts/containerd-mount254625504/usr/bin/calico-node: no space left on device" May 27 17:46:48.088014 kubelet[2669]: E0527 17:46:48.087893 2669 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount254625504: write /var/lib/containerd/tmpmounts/containerd-mount254625504/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:46:48.088014 kubelet[2669]: E0527 17:46:48.087934 2669 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount254625504: write /var/lib/containerd/tmpmounts/containerd-mount254625504/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:46:48.092963 systemd[1]: var-lib-kubelet-pods-1492d653\x2dd5be\x2d4d5d\x2da8b8\x2d83419919ce71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dps76r.mount: Deactivated successfully. 
May 27 17:46:48.096631 kubelet[2669]: I0527 17:46:48.096326 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1492d653-d5be-4d5d-a8b8-83419919ce71-kube-api-access-ps76r" (OuterVolumeSpecName: "kube-api-access-ps76r") pod "1492d653-d5be-4d5d-a8b8-83419919ce71" (UID: "1492d653-d5be-4d5d-a8b8-83419919ce71"). InnerVolumeSpecName "kube-api-access-ps76r". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:46:48.097341 kubelet[2669]: I0527 17:46:48.097189 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1492d653-d5be-4d5d-a8b8-83419919ce71" (UID: "1492d653-d5be-4d5d-a8b8-83419919ce71"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 17:46:48.097933 kubelet[2669]: E0527 17:46:48.097811 2669 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.9
6.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k47nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-68gs9_calico-system(1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount254625504: write /var/lib/containerd/tmpmounts/containerd-mount254625504/usr/bin/calico-node: no space left on device" logger="UnhandledError" May 27 17:46:48.099204 systemd[1]: var-lib-kubelet-pods-1492d653\x2dd5be\x2d4d5d\x2da8b8\x2d83419919ce71-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 17:46:48.100186 kubelet[2669]: E0527 17:46:48.100045 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount254625504: write /var/lib/containerd/tmpmounts/containerd-mount254625504/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07" May 27 17:46:48.185521 kubelet[2669]: I0527 17:46:48.185376 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 17:46:48.185521 kubelet[2669]: I0527 17:46:48.185412 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1492d653-d5be-4d5d-a8b8-83419919ce71-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 17:46:48.185521 kubelet[2669]: I0527 17:46:48.185421 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps76r\" (UniqueName: \"kubernetes.io/projected/1492d653-d5be-4d5d-a8b8-83419919ce71-kube-api-access-ps76r\") on node \"localhost\" DevicePath \"\"" May 27 17:46:48.943615 kubelet[2669]: E0527 17:46:48.943574 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\"\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07" May 27 17:46:48.950693 systemd[1]: Removed slice kubepods-besteffort-pod1492d653_d5be_4d5d_a8b8_83419919ce71.slice - libcontainer container 
kubepods-besteffort-pod1492d653_d5be_4d5d_a8b8_83419919ce71.slice. May 27 17:46:49.061557 kubelet[2669]: I0527 17:46:49.061487 2669 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-866668bb89-mvv6p"] May 27 17:46:49.079338 kubelet[2669]: I0527 17:46:49.079295 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:46:49.079338 kubelet[2669]: I0527 17:46:49.079332 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:46:49.081032 kubelet[2669]: I0527 17:46:49.081006 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:46:49.092164 kubelet[2669]: I0527 17:46:49.092136 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:46:49.092243 kubelet[2669]: I0527 17:46:49.092222 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-647ff8d844-tznbj","kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/csi-node-driver-lf5vj","calico-system/calico-node-68gs9","tigera-operator/tigera-operator-7c5755cdcb-f52fk","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:46:49.096599 kubelet[2669]: I0527 17:46:49.096574 2669 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-647ff8d844-tznbj" May 27 17:46:49.096599 kubelet[2669]: I0527 17:46:49.096594 2669 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-647ff8d844-tznbj"] May 27 17:46:49.191763 kubelet[2669]: I0527 17:46:49.191709 2669 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c240021b-2352-4222-8f50-18cae2a0375b-calico-apiserver-certs\") pod \"c240021b-2352-4222-8f50-18cae2a0375b\" (UID: \"c240021b-2352-4222-8f50-18cae2a0375b\") " May 27 17:46:49.191763 kubelet[2669]: I0527 17:46:49.191759 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwkrv\" (UniqueName: \"kubernetes.io/projected/c240021b-2352-4222-8f50-18cae2a0375b-kube-api-access-pwkrv\") pod \"c240021b-2352-4222-8f50-18cae2a0375b\" (UID: \"c240021b-2352-4222-8f50-18cae2a0375b\") " May 27 17:46:49.195385 kubelet[2669]: I0527 17:46:49.195295 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c240021b-2352-4222-8f50-18cae2a0375b-kube-api-access-pwkrv" (OuterVolumeSpecName: "kube-api-access-pwkrv") pod "c240021b-2352-4222-8f50-18cae2a0375b" (UID: "c240021b-2352-4222-8f50-18cae2a0375b"). InnerVolumeSpecName "kube-api-access-pwkrv". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:46:49.195615 kubelet[2669]: I0527 17:46:49.195595 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c240021b-2352-4222-8f50-18cae2a0375b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c240021b-2352-4222-8f50-18cae2a0375b" (UID: "c240021b-2352-4222-8f50-18cae2a0375b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 17:46:49.196990 systemd[1]: var-lib-kubelet-pods-c240021b\x2d2352\x2d4222\x2d8f50\x2d18cae2a0375b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwkrv.mount: Deactivated successfully. May 27 17:46:49.197112 systemd[1]: var-lib-kubelet-pods-c240021b\x2d2352\x2d4222\x2d8f50\x2d18cae2a0375b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 27 17:46:49.292663 kubelet[2669]: I0527 17:46:49.292600 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c240021b-2352-4222-8f50-18cae2a0375b-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 27 17:46:49.292663 kubelet[2669]: I0527 17:46:49.292640 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwkrv\" (UniqueName: \"kubernetes.io/projected/c240021b-2352-4222-8f50-18cae2a0375b-kube-api-access-pwkrv\") on node \"localhost\" DevicePath \"\"" May 27 17:46:49.855412 systemd[1]: Removed slice kubepods-besteffort-podc240021b_2352_4222_8f50_18cae2a0375b.slice - libcontainer container kubepods-besteffort-podc240021b_2352_4222_8f50_18cae2a0375b.slice. May 27 17:46:50.097031 kubelet[2669]: I0527 17:46:50.096967 2669 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-647ff8d844-tznbj"] May 27 17:46:51.886366 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:45000.service - OpenSSH per-connection server daemon (10.0.0.1:45000). May 27 17:46:51.948310 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 45000 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:46:51.950110 sshd-session[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:51.955216 systemd-logind[1504]: New session 8 of user core. May 27 17:46:51.967974 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:46:52.096148 sshd[3709]: Connection closed by 10.0.0.1 port 45000 May 27 17:46:52.096498 sshd-session[3707]: pam_unix(sshd:session): session closed for user core May 27 17:46:52.101302 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:45000.service: Deactivated successfully. May 27 17:46:52.103306 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:46:52.104122 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. 
May 27 17:46:52.105433 systemd-logind[1504]: Removed session 8. May 27 17:46:55.848678 kubelet[2669]: E0527 17:46:55.848635 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:55.849289 containerd[1533]: time="2025-05-27T17:46:55.849010437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}" May 27 17:46:55.899170 containerd[1533]: time="2025-05-27T17:46:55.899109777Z" level=error msg="Failed to destroy network for sandbox \"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:55.900909 containerd[1533]: time="2025-05-27T17:46:55.900845057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:55.901314 systemd[1]: run-netns-cni\x2dbcd4c069\x2dbd4f\x2d43cf\x2d6615\x2dc18830f57108.mount: Deactivated successfully. 
May 27 17:46:55.901678 kubelet[2669]: E0527 17:46:55.901289 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:55.901678 kubelet[2669]: E0527 17:46:55.901357 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:55.901678 kubelet[2669]: E0527 17:46:55.901383 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:46:55.901678 kubelet[2669]: E0527 17:46:55.901463 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba573021a4b0641e98195c357b7be2a82d2643751509dcaf42b700eed416339a\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260" May 27 17:46:56.848091 kubelet[2669]: E0527 17:46:56.848046 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:46:56.848531 containerd[1533]: time="2025-05-27T17:46:56.848488707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:46:56.900266 containerd[1533]: time="2025-05-27T17:46:56.900203048Z" level=error msg="Failed to destroy network for sandbox \"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:56.902335 containerd[1533]: time="2025-05-27T17:46:56.902290264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:56.902673 systemd[1]: run-netns-cni\x2d89416207\x2defbb\x2de7e2\x2d6845\x2d448f45ec3c01.mount: Deactivated successfully. 
May 27 17:46:56.902968 kubelet[2669]: E0527 17:46:56.902886 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:56.903209 kubelet[2669]: E0527 17:46:56.902995 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:56.903209 kubelet[2669]: E0527 17:46:56.903014 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:46:56.903209 kubelet[2669]: E0527 17:46:56.903058 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df893df5989c6a4fbb54989858580af02dab73f14e5608e2cf773578f5a817d5\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7" May 27 17:46:57.115114 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100). May 27 17:46:57.171188 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:46:57.173127 sshd-session[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:57.178225 systemd-logind[1504]: New session 9 of user core. May 27 17:46:57.188933 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:46:57.317219 sshd[3789]: Connection closed by 10.0.0.1 port 49100 May 27 17:46:57.317592 sshd-session[3787]: pam_unix(sshd:session): session closed for user core May 27 17:46:57.323040 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:49100.service: Deactivated successfully. May 27 17:46:57.325616 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:46:57.326589 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. May 27 17:46:57.328153 systemd-logind[1504]: Removed session 9. 
May 27 17:46:57.849448 containerd[1533]: time="2025-05-27T17:46:57.849387174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}" May 27 17:46:57.902017 containerd[1533]: time="2025-05-27T17:46:57.901960459Z" level=error msg="Failed to destroy network for sandbox \"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:57.903617 containerd[1533]: time="2025-05-27T17:46:57.903496249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:57.903900 kubelet[2669]: E0527 17:46:57.903865 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:57.904221 kubelet[2669]: E0527 17:46:57.903922 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:57.904221 kubelet[2669]: E0527 17:46:57.903945 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:46:57.904221 kubelet[2669]: E0527 17:46:57.904005 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37f132b20ded3446230c411852bb7a269275ca288312e2eea9b6b04eb7e789f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad" May 27 17:46:57.904354 systemd[1]: run-netns-cni\x2daa4c6b0c\x2dc707\x2d9914\x2dad93\x2dd9e1621e5233.mount: Deactivated successfully. 
May 27 17:46:58.849279 containerd[1533]: time="2025-05-27T17:46:58.849219907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:46:58.899748 containerd[1533]: time="2025-05-27T17:46:58.899696033Z" level=error msg="Failed to destroy network for sandbox \"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:58.901230 containerd[1533]: time="2025-05-27T17:46:58.901163291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:58.901459 kubelet[2669]: E0527 17:46:58.901367 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:46:58.901701 kubelet[2669]: E0527 17:46:58.901421 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:58.901739 kubelet[2669]: E0527 17:46:58.901705 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:46:58.901800 kubelet[2669]: E0527 17:46:58.901748 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19af757400a519972708e98b7ffb22571f3e2f4223c1c4b1a0b333d482d929e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:46:58.902045 systemd[1]: run-netns-cni\x2df60b3bec\x2db7df\x2d9646\x2d77d2\x2d892224e0081b.mount: Deactivated successfully. 
May 27 17:47:00.121332 kubelet[2669]: I0527 17:47:00.121289 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:00.121332 kubelet[2669]: I0527 17:47:00.121326 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:00.123047 kubelet[2669]: I0527 17:47:00.123020 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:00.134796 kubelet[2669]: I0527 17:47:00.134694 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:00.135020 kubelet[2669]: I0527 17:47:00.134995 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","tigera-operator/tigera-operator-7c5755cdcb-f52fk","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:47:00.135122 kubelet[2669]: E0527 17:47:00.135035 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:00.135122 kubelet[2669]: E0527 17:47:00.135047 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:00.135122 kubelet[2669]: E0527 17:47:00.135055 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:00.135122 kubelet[2669]: E0527 17:47:00.135063 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9" May 27 17:47:00.135122 kubelet[2669]: E0527 17:47:00.135070 2669 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:00.163944 containerd[1533]: time="2025-05-27T17:47:00.163884468Z" level=info msg="StopContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" with timeout 2 (s)" May 27 17:47:00.170280 containerd[1533]: time="2025-05-27T17:47:00.170254601Z" level=info msg="Stop container \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" with signal terminated" May 27 17:47:00.336645 systemd[1]: cri-containerd-40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3.scope: Deactivated successfully. May 27 17:47:00.337241 systemd[1]: cri-containerd-40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3.scope: Consumed 4.736s CPU time, 83.7M memory peak. May 27 17:47:00.337635 containerd[1533]: time="2025-05-27T17:47:00.337596854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" id:\"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" pid:2988 exited_at:{seconds:1748368020 nanos:337121773}" May 27 17:47:00.337764 containerd[1533]: time="2025-05-27T17:47:00.337683195Z" level=info msg="received exit event container_id:\"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" id:\"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" pid:2988 exited_at:{seconds:1748368020 nanos:337121773}" May 27 17:47:00.360617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3-rootfs.mount: Deactivated successfully. 
May 27 17:47:00.377392 containerd[1533]: time="2025-05-27T17:47:00.377273419Z" level=info msg="StopContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" returns successfully" May 27 17:47:00.378214 containerd[1533]: time="2025-05-27T17:47:00.378176650Z" level=info msg="StopPodSandbox for \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\"" May 27 17:47:00.384082 containerd[1533]: time="2025-05-27T17:47:00.384038445Z" level=info msg="Container to stop \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:47:00.397801 systemd[1]: cri-containerd-72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd.scope: Deactivated successfully. May 27 17:47:00.399496 containerd[1533]: time="2025-05-27T17:47:00.399409668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" id:\"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" pid:2831 exit_status:137 exited_at:{seconds:1748368020 nanos:398860289}" May 27 17:47:00.434949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd-rootfs.mount: Deactivated successfully. 
May 27 17:47:00.514071 containerd[1533]: time="2025-05-27T17:47:00.514005578Z" level=info msg="shim disconnected" id=72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd namespace=k8s.io May 27 17:47:00.514071 containerd[1533]: time="2025-05-27T17:47:00.514040778Z" level=warning msg="cleaning up after shim disconnected" id=72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd namespace=k8s.io May 27 17:47:00.534928 containerd[1533]: time="2025-05-27T17:47:00.514639185Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:47:00.552398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd-shm.mount: Deactivated successfully. May 27 17:47:00.562620 containerd[1533]: time="2025-05-27T17:47:00.562556833Z" level=info msg="TearDown network for sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" successfully" May 27 17:47:00.562620 containerd[1533]: time="2025-05-27T17:47:00.562605119Z" level=info msg="StopPodSandbox for \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" returns successfully" May 27 17:47:00.567085 containerd[1533]: time="2025-05-27T17:47:00.567041847Z" level=info msg="received exit event sandbox_id:\"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" exit_status:137 exited_at:{seconds:1748368020 nanos:398860289}" May 27 17:47:00.572403 kubelet[2669]: I0527 17:47:00.572364 2669 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-7c5755cdcb-f52fk" May 27 17:47:00.572403 kubelet[2669]: I0527 17:47:00.572402 2669 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-7c5755cdcb-f52fk"] May 27 17:47:00.659378 kubelet[2669]: I0527 17:47:00.659239 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-var-lib-calico\") pod \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\" (UID: \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\") " May 27 17:47:00.659378 kubelet[2669]: I0527 17:47:00.659294 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hm4\" (UniqueName: \"kubernetes.io/projected/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-kube-api-access-b8hm4\") pod \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\" (UID: \"ff318a34-6e5c-4f3e-b1de-3c19682d8c74\") " May 27 17:47:00.659378 kubelet[2669]: I0527 17:47:00.659367 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ff318a34-6e5c-4f3e-b1de-3c19682d8c74" (UID: "ff318a34-6e5c-4f3e-b1de-3c19682d8c74"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:47:00.662465 kubelet[2669]: I0527 17:47:00.662404 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-kube-api-access-b8hm4" (OuterVolumeSpecName: "kube-api-access-b8hm4") pod "ff318a34-6e5c-4f3e-b1de-3c19682d8c74" (UID: "ff318a34-6e5c-4f3e-b1de-3c19682d8c74"). InnerVolumeSpecName "kube-api-access-b8hm4". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:47:00.663749 systemd[1]: var-lib-kubelet-pods-ff318a34\x2d6e5c\x2d4f3e\x2db1de\x2d3c19682d8c74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8hm4.mount: Deactivated successfully. 
May 27 17:47:00.759785 kubelet[2669]: I0527 17:47:00.759722 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8hm4\" (UniqueName: \"kubernetes.io/projected/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-kube-api-access-b8hm4\") on node \"localhost\" DevicePath \"\"" May 27 17:47:00.759785 kubelet[2669]: I0527 17:47:00.759757 2669 reconciler_common.go:293] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff318a34-6e5c-4f3e-b1de-3c19682d8c74-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 27 17:47:00.963701 kubelet[2669]: I0527 17:47:00.963676 2669 scope.go:117] "RemoveContainer" containerID="40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3" May 27 17:47:00.965290 containerd[1533]: time="2025-05-27T17:47:00.965246875Z" level=info msg="RemoveContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\"" May 27 17:47:00.971087 systemd[1]: Removed slice kubepods-besteffort-podff318a34_6e5c_4f3e_b1de_3c19682d8c74.slice - libcontainer container kubepods-besteffort-podff318a34_6e5c_4f3e_b1de_3c19682d8c74.slice. May 27 17:47:00.971198 systemd[1]: kubepods-besteffort-podff318a34_6e5c_4f3e_b1de_3c19682d8c74.slice: Consumed 4.762s CPU time, 83.9M memory peak. 
May 27 17:47:00.977528 containerd[1533]: time="2025-05-27T17:47:00.977465246Z" level=info msg="RemoveContainer for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" returns successfully" May 27 17:47:00.977925 kubelet[2669]: I0527 17:47:00.977816 2669 scope.go:117] "RemoveContainer" containerID="40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3" May 27 17:47:00.978218 containerd[1533]: time="2025-05-27T17:47:00.978153371Z" level=error msg="ContainerStatus for \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\": not found" May 27 17:47:00.978468 kubelet[2669]: E0527 17:47:00.978401 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\": not found" containerID="40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3" May 27 17:47:00.978468 kubelet[2669]: I0527 17:47:00.978428 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3"} err="failed to get container status \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\": rpc error: code = NotFound desc = an error occurred when try to find container \"40b3d4b931e2f3d2a722490d2994464a90066769afebcadb4bd443ecefb0aee3\": not found" May 27 17:47:01.572956 kubelet[2669]: I0527 17:47:01.572895 2669 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-7c5755cdcb-f52fk"] May 27 17:47:02.333796 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114). 
May 27 17:47:02.396563 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:02.398578 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:02.403365 systemd-logind[1504]: New session 10 of user core. May 27 17:47:02.413931 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:47:02.526721 sshd[3929]: Connection closed by 10.0.0.1 port 49114 May 27 17:47:02.527064 sshd-session[3927]: pam_unix(sshd:session): session closed for user core May 27 17:47:02.539674 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:49114.service: Deactivated successfully. May 27 17:47:02.541548 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:47:02.542403 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. May 27 17:47:02.545103 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122). May 27 17:47:02.545945 systemd-logind[1504]: Removed session 10. May 27 17:47:02.607022 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:02.608511 sshd-session[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:02.612730 systemd-logind[1504]: New session 11 of user core. May 27 17:47:02.621905 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:47:02.766651 sshd[3945]: Connection closed by 10.0.0.1 port 49122 May 27 17:47:02.767010 sshd-session[3943]: pam_unix(sshd:session): session closed for user core May 27 17:47:02.779399 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:49122.service: Deactivated successfully. May 27 17:47:02.784464 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:47:02.787348 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. 
May 27 17:47:02.791360 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:49132.service - OpenSSH per-connection server daemon (10.0.0.1:49132). May 27 17:47:02.792155 systemd-logind[1504]: Removed session 11. May 27 17:47:02.847276 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 49132 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:02.849354 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:02.849800 containerd[1533]: time="2025-05-27T17:47:02.849632888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:47:02.856666 systemd-logind[1504]: New session 12 of user core. May 27 17:47:02.860035 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:47:02.977831 sshd[3959]: Connection closed by 10.0.0.1 port 49132 May 27 17:47:02.978141 sshd-session[3957]: pam_unix(sshd:session): session closed for user core May 27 17:47:02.982449 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:49132.service: Deactivated successfully. May 27 17:47:02.984560 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:47:02.985580 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. May 27 17:47:02.987275 systemd-logind[1504]: Removed session 12. May 27 17:47:06.778236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149274952.mount: Deactivated successfully. 
May 27 17:47:06.780580 containerd[1533]: time="2025-05-27T17:47:06.780512872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4149274952: write /var/lib/containerd/tmpmounts/containerd-mount4149274952/usr/bin/calico-node: no space left on device" May 27 17:47:06.781005 containerd[1533]: time="2025-05-27T17:47:06.780609372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:47:06.781043 kubelet[2669]: E0527 17:47:06.780827 2669 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4149274952: write /var/lib/containerd/tmpmounts/containerd-mount4149274952/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:47:06.781043 kubelet[2669]: E0527 17:47:06.780899 2669 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4149274952: write /var/lib/containerd/tmpmounts/containerd-mount4149274952/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:47:06.781357 kubelet[2669]: E0527 17:47:06.781151 2669 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.9
6.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k47nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-68gs9_calico-system(1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4149274952: write /var/lib/containerd/tmpmounts/containerd-mount4149274952/usr/bin/calico-node: no space left on device" logger="UnhandledError" May 27 17:47:06.782429 kubelet[2669]: E0527 17:47:06.782386 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4149274952: write /var/lib/containerd/tmpmounts/containerd-mount4149274952/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07" May 27 17:47:07.999156 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:55394.service - OpenSSH per-connection server daemon (10.0.0.1:55394). May 27 17:47:08.055310 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 55394 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:08.056913 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:08.061648 systemd-logind[1504]: New session 13 of user core. May 27 17:47:08.070918 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:47:08.190005 sshd[3978]: Connection closed by 10.0.0.1 port 55394 May 27 17:47:08.190382 sshd-session[3976]: pam_unix(sshd:session): session closed for user core May 27 17:47:08.195176 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:55394.service: Deactivated successfully. May 27 17:47:08.197049 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:47:08.197746 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. May 27 17:47:08.198871 systemd-logind[1504]: Removed session 13. 
May 27 17:47:08.848420 kubelet[2669]: E0527 17:47:08.848374 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:08.848948 containerd[1533]: time="2025-05-27T17:47:08.848858337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}" May 27 17:47:08.905057 containerd[1533]: time="2025-05-27T17:47:08.905007272Z" level=error msg="Failed to destroy network for sandbox \"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:08.906589 containerd[1533]: time="2025-05-27T17:47:08.906538993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:08.906840 kubelet[2669]: E0527 17:47:08.906750 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:08.906894 kubelet[2669]: E0527 17:47:08.906868 2669 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:08.906976 kubelet[2669]: E0527 17:47:08.906895 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:08.906976 kubelet[2669]: E0527 17:47:08.906951 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7518958bd8c608f13e6f4bc719241eb15b8442db622b2ccbaa1f9146714a1302\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260" May 27 17:47:08.907242 systemd[1]: run-netns-cni\x2d9d019981\x2dcc3d\x2dc91a\x2d1831\x2d91cbf0227e55.mount: Deactivated successfully. 
May 27 17:47:09.849096 containerd[1533]: time="2025-05-27T17:47:09.849038687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:47:09.849598 containerd[1533]: time="2025-05-27T17:47:09.849134135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}" May 27 17:47:09.958999 containerd[1533]: time="2025-05-27T17:47:09.958948818Z" level=error msg="Failed to destroy network for sandbox \"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:09.962328 systemd[1]: run-netns-cni\x2d794c70a7\x2d7e84\x2db4cd\x2dcf71\x2de82016c52213.mount: Deactivated successfully. May 27 17:47:09.984036 containerd[1533]: time="2025-05-27T17:47:09.983974171Z" level=error msg="Failed to destroy network for sandbox \"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:09.986334 systemd[1]: run-netns-cni\x2d3dc33f9e\x2decc7\x2d4026\x2d5127\x2d874d057f06d0.mount: Deactivated successfully. 
May 27 17:47:09.992539 containerd[1533]: time="2025-05-27T17:47:09.992487194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:09.992728 kubelet[2669]: E0527 17:47:09.992683 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:09.993074 kubelet[2669]: E0527 17:47:09.992736 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:09.993074 kubelet[2669]: E0527 17:47:09.992754 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" 
May 27 17:47:09.993074 kubelet[2669]: E0527 17:47:09.992825 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1db0d70874df4a37fed545df2fedaba9f01ef2a5826fe345e39d0cda56cc494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:47:10.013015 containerd[1533]: time="2025-05-27T17:47:10.012946560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:10.013245 kubelet[2669]: E0527 17:47:10.013206 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:10.013305 kubelet[2669]: E0527 17:47:10.013263 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:10.013305 kubelet[2669]: E0527 17:47:10.013287 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:10.013375 kubelet[2669]: E0527 17:47:10.013327 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17810115c2b19bd351290f4eb2a91539f0e2b236012d26a3c8b16a62ab3024d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad" May 27 17:47:10.848050 kubelet[2669]: E0527 17:47:10.847983 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:10.848917 containerd[1533]: 
time="2025-05-27T17:47:10.848531503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:47:10.904906 containerd[1533]: time="2025-05-27T17:47:10.904843937Z" level=error msg="Failed to destroy network for sandbox \"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:10.906601 containerd[1533]: time="2025-05-27T17:47:10.906546669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:10.906891 kubelet[2669]: E0527 17:47:10.906839 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:10.907136 kubelet[2669]: E0527 17:47:10.907101 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:10.907136 kubelet[2669]: E0527 17:47:10.907136 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:10.907266 kubelet[2669]: E0527 17:47:10.907198 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d9987873ee46ecca51a6114b539c21839e6dc95d059af945f9f7fb02a3d877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7" May 27 17:47:10.907137 systemd[1]: run-netns-cni\x2d30b1baf8\x2d4986\x2d14c7\x2d2306\x2dbc56153083b5.mount: Deactivated successfully. 
May 27 17:47:11.594680 kubelet[2669]: I0527 17:47:11.594634 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:11.594680 kubelet[2669]: I0527 17:47:11.594668 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:11.596133 containerd[1533]: time="2025-05-27T17:47:11.596073851Z" level=info msg="StopPodSandbox for \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\"" May 27 17:47:11.596248 containerd[1533]: time="2025-05-27T17:47:11.596226520Z" level=info msg="TearDown network for sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" successfully" May 27 17:47:11.596350 containerd[1533]: time="2025-05-27T17:47:11.596247011Z" level=info msg="StopPodSandbox for \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" returns successfully" May 27 17:47:11.596705 containerd[1533]: time="2025-05-27T17:47:11.596622207Z" level=info msg="RemovePodSandbox for \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\"" May 27 17:47:11.626553 containerd[1533]: time="2025-05-27T17:47:11.626501760Z" level=info msg="Forcibly stopping sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\"" May 27 17:47:11.626684 containerd[1533]: time="2025-05-27T17:47:11.626661554Z" level=info msg="TearDown network for sandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" successfully" May 27 17:47:11.628029 containerd[1533]: time="2025-05-27T17:47:11.628005560Z" level=info msg="Ensure that sandbox 72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd in task-service has been cleanup successfully" May 27 17:47:11.708861 containerd[1533]: time="2025-05-27T17:47:11.708759852Z" level=info msg="RemovePodSandbox \"72b755a55fbecbf1ac1ae98af36db0b96c9c218a44b2ded99547776bbbef5afd\" returns successfully" May 27 17:47:11.709530 kubelet[2669]: I0527 17:47:11.709497 2669 image_gc_manager.go:431] 
"Attempting to delete unused images" May 27 17:47:11.720434 kubelet[2669]: I0527 17:47:11.720384 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:11.720580 kubelet[2669]: I0527 17:47:11.720489 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720524 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720537 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720546 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720555 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9" May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720565 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:11.720580 kubelet[2669]: E0527 17:47:11.720584 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:47:11.720808 kubelet[2669]: E0527 17:47:11.720596 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:47:11.720808 kubelet[2669]: E0527 17:47:11.720607 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7kgbm" May 27 17:47:11.720808 kubelet[2669]: E0527 17:47:11.720618 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 17:47:11.720808 kubelet[2669]: E0527 17:47:11.720628 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 17:47:11.720808 kubelet[2669]: I0527 17:47:11.720640 2669 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:13.208453 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:41514.service - OpenSSH per-connection server daemon (10.0.0.1:41514). May 27 17:47:13.265740 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 41514 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:13.267266 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:13.271914 systemd-logind[1504]: New session 14 of user core. May 27 17:47:13.289025 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:47:13.402637 sshd[4124]: Connection closed by 10.0.0.1 port 41514 May 27 17:47:13.403041 sshd-session[4122]: pam_unix(sshd:session): session closed for user core May 27 17:47:13.407022 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:41514.service: Deactivated successfully. May 27 17:47:13.409070 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:47:13.409916 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. May 27 17:47:13.411191 systemd-logind[1504]: Removed session 14. May 27 17:47:18.415498 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:41610.service - OpenSSH per-connection server daemon (10.0.0.1:41610). 
May 27 17:47:18.476121 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:18.477467 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:18.481538 systemd-logind[1504]: New session 15 of user core. May 27 17:47:18.491949 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:47:18.606852 sshd[4141]: Connection closed by 10.0.0.1 port 41610 May 27 17:47:18.607181 sshd-session[4139]: pam_unix(sshd:session): session closed for user core May 27 17:47:18.611844 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:41610.service: Deactivated successfully. May 27 17:47:18.613925 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:47:18.615016 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. May 27 17:47:18.616769 systemd-logind[1504]: Removed session 15. May 27 17:47:18.849633 kubelet[2669]: E0527 17:47:18.849577 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\"\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07" May 27 17:47:21.735963 kubelet[2669]: I0527 17:47:21.735903 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:21.735963 kubelet[2669]: I0527 17:47:21.735949 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:21.737565 kubelet[2669]: I0527 17:47:21.737534 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:21.753282 kubelet[2669]: I0527 17:47:21.753229 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:21.753428 kubelet[2669]: I0527 17:47:21.753315 2669 eviction_manager.go:398] "Eviction manager: 
pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753345 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753354 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753361 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753368 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753375 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753387 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753396 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753405 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7kgbm" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753413 2669 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 17:47:21.753428 kubelet[2669]: E0527 17:47:21.753423 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 17:47:21.753428 kubelet[2669]: I0527 17:47:21.753433 2669 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:21.848629 containerd[1533]: time="2025-05-27T17:47:21.848573584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:47:21.991112 containerd[1533]: time="2025-05-27T17:47:21.990929199Z" level=error msg="Failed to destroy network for sandbox \"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:21.993637 systemd[1]: run-netns-cni\x2d031c3d14\x2d3925\x2dd6c0\x2dea49\x2dab4d99a63390.mount: Deactivated successfully. 
May 27 17:47:22.025659 containerd[1533]: time="2025-05-27T17:47:22.025585843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:22.025911 kubelet[2669]: E0527 17:47:22.025864 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:22.025962 kubelet[2669]: E0527 17:47:22.025938 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:22.025989 kubelet[2669]: E0527 17:47:22.025964 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" 
May 27 17:47:22.026061 kubelet[2669]: E0527 17:47:22.026017 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"384ef2052ecb98a5f13bddc5500f61747d55c7f2cd5a4d74a6c0226774da6fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:47:22.848459 kubelet[2669]: E0527 17:47:22.848403 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:22.849004 containerd[1533]: time="2025-05-27T17:47:22.848842525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:47:22.980109 containerd[1533]: time="2025-05-27T17:47:22.980042017Z" level=error msg="Failed to destroy network for sandbox \"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:22.982926 systemd[1]: run-netns-cni\x2d9587122c\x2d112d\x2d5c63\x2d1ed8\x2d38d9d71cb99f.mount: Deactivated successfully. 
May 27 17:47:23.038285 containerd[1533]: time="2025-05-27T17:47:23.038202990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:23.038605 kubelet[2669]: E0527 17:47:23.038520 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:23.038605 kubelet[2669]: E0527 17:47:23.038604 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:23.038889 kubelet[2669]: E0527 17:47:23.038631 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:23.038889 kubelet[2669]: E0527 17:47:23.038687 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4859f0bce4b1aa47b2746a35005288bb7f587a23d4ef2cc6ce22d1f877c40b43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7" May 27 17:47:23.624379 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664). May 27 17:47:23.670884 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:23.672470 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:23.677164 systemd-logind[1504]: New session 16 of user core. May 27 17:47:23.688025 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:47:23.797908 sshd[4224]: Connection closed by 10.0.0.1 port 45664 May 27 17:47:23.798251 sshd-session[4222]: pam_unix(sshd:session): session closed for user core May 27 17:47:23.813024 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:45664.service: Deactivated successfully. May 27 17:47:23.815116 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:47:23.815977 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. 
May 27 17:47:23.819016 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:45678.service - OpenSSH per-connection server daemon (10.0.0.1:45678). May 27 17:47:23.819680 systemd-logind[1504]: Removed session 16. May 27 17:47:23.849811 kubelet[2669]: E0527 17:47:23.849106 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:23.850360 containerd[1533]: time="2025-05-27T17:47:23.849823890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}" May 27 17:47:23.850360 containerd[1533]: time="2025-05-27T17:47:23.850312433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}" May 27 17:47:23.866847 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 45678 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:23.868632 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:23.873550 systemd-logind[1504]: New session 17 of user core. May 27 17:47:23.882964 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:47:24.003574 containerd[1533]: time="2025-05-27T17:47:24.003462272Z" level=error msg="Failed to destroy network for sandbox \"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.005707 systemd[1]: run-netns-cni\x2dbef11a31\x2d6fec\x2db0a6\x2d6908\x2de4d08ead2de8.mount: Deactivated successfully. 
May 27 17:47:24.026249 containerd[1533]: time="2025-05-27T17:47:24.026192552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.026469 kubelet[2669]: E0527 17:47:24.026412 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.026554 kubelet[2669]: E0527 17:47:24.026487 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:24.026554 kubelet[2669]: E0527 17:47:24.026507 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:24.026622 kubelet[2669]: E0527 17:47:24.026556 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e18bc12d279749d2b6af5b54965ce7277c951f36b8fede87cdc3f22e68778e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad" May 27 17:47:24.043254 containerd[1533]: time="2025-05-27T17:47:24.043204242Z" level=error msg="Failed to destroy network for sandbox \"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.045646 systemd[1]: run-netns-cni\x2d5bb78abf\x2dc39e\x2d8875\x2d42a5\x2d236e3be8631f.mount: Deactivated successfully. 
May 27 17:47:24.052407 containerd[1533]: time="2025-05-27T17:47:24.052229880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.053427 kubelet[2669]: E0527 17:47:24.053355 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:24.053427 kubelet[2669]: E0527 17:47:24.053437 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:24.053636 kubelet[2669]: E0527 17:47:24.053456 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:24.053636 kubelet[2669]: E0527 17:47:24.053510 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"387ff3c9c287418ae91163b703d84de90f3265b975c0c92f11272f080caa3a50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260" May 27 17:47:24.446270 sshd[4240]: Connection closed by 10.0.0.1 port 45678 May 27 17:47:24.447017 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 27 17:47:24.459162 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:45678.service: Deactivated successfully. May 27 17:47:24.463527 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:47:24.467316 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. May 27 17:47:24.473884 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:45686.service - OpenSSH per-connection server daemon (10.0.0.1:45686). May 27 17:47:24.475863 systemd-logind[1504]: Removed session 17. May 27 17:47:24.533249 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 45686 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:24.535033 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:24.541805 systemd-logind[1504]: New session 18 of user core. May 27 17:47:24.550052 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 27 17:47:26.429989 sshd[4320]: Connection closed by 10.0.0.1 port 45686 May 27 17:47:26.430584 sshd-session[4318]: pam_unix(sshd:session): session closed for user core May 27 17:47:26.441827 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:45686.service: Deactivated successfully. May 27 17:47:26.444753 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:47:26.445273 systemd[1]: session-18.scope: Consumed 627ms CPU time, 78.4M memory peak. May 27 17:47:26.446486 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. May 27 17:47:26.450972 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818). May 27 17:47:26.453569 systemd-logind[1504]: Removed session 18. May 27 17:47:26.497762 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:26.499936 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:26.504962 systemd-logind[1504]: New session 19 of user core. May 27 17:47:26.515042 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:47:26.739494 sshd[4341]: Connection closed by 10.0.0.1 port 45818 May 27 17:47:26.740055 sshd-session[4339]: pam_unix(sshd:session): session closed for user core May 27 17:47:26.750169 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:45818.service: Deactivated successfully. May 27 17:47:26.752229 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:47:26.753153 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit. May 27 17:47:26.756259 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:45820.service - OpenSSH per-connection server daemon (10.0.0.1:45820). May 27 17:47:26.757707 systemd-logind[1504]: Removed session 19. 
May 27 17:47:26.804682 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 45820 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:26.806498 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:26.812057 systemd-logind[1504]: New session 20 of user core. May 27 17:47:26.819001 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:47:26.930710 sshd[4354]: Connection closed by 10.0.0.1 port 45820 May 27 17:47:26.931983 sshd-session[4352]: pam_unix(sshd:session): session closed for user core May 27 17:47:26.936544 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:45820.service: Deactivated successfully. May 27 17:47:26.938694 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:47:26.939597 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. May 27 17:47:26.940819 systemd-logind[1504]: Removed session 20. May 27 17:47:27.849422 kubelet[2669]: E0527 17:47:27.849133 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:27.850593 kubelet[2669]: E0527 17:47:27.849900 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:31.774137 kubelet[2669]: I0527 17:47:31.774087 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:31.774137 kubelet[2669]: I0527 17:47:31.774128 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:31.775565 kubelet[2669]: I0527 17:47:31.775547 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:31.786130 kubelet[2669]: I0527 17:47:31.786093 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" 
resourceName="ephemeral-storage" May 27 17:47:31.786266 kubelet[2669]: I0527 17:47:31.786187 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786218 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786231 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786240 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786249 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9" May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786257 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:31.786266 kubelet[2669]: E0527 17:47:31.786271 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:47:31.786504 kubelet[2669]: E0527 17:47:31.786282 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:47:31.786504 kubelet[2669]: E0527 17:47:31.786294 2669 eviction_manager.go:598] "Eviction manager: cannot 
evict a critical pod" pod="kube-system/kube-proxy-7kgbm" May 27 17:47:31.786504 kubelet[2669]: E0527 17:47:31.786304 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 17:47:31.786504 kubelet[2669]: E0527 17:47:31.786315 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 17:47:31.786504 kubelet[2669]: I0527 17:47:31.786327 2669 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:31.947829 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:45826.service - OpenSSH per-connection server daemon (10.0.0.1:45826). May 27 17:47:32.002165 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 45826 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:32.004049 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:32.009554 systemd-logind[1504]: New session 21 of user core. May 27 17:47:32.019056 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:47:32.141562 sshd[4375]: Connection closed by 10.0.0.1 port 45826 May 27 17:47:32.142006 sshd-session[4373]: pam_unix(sshd:session): session closed for user core May 27 17:47:32.148089 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:45826.service: Deactivated successfully. May 27 17:47:32.150407 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:47:32.151675 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit. May 27 17:47:32.153419 systemd-logind[1504]: Removed session 21. 
May 27 17:47:32.849270 containerd[1533]: time="2025-05-27T17:47:32.849218557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:47:32.850205 containerd[1533]: time="2025-05-27T17:47:32.850149064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 17:47:32.907832 containerd[1533]: time="2025-05-27T17:47:32.907760299Z" level=error msg="Failed to destroy network for sandbox \"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:32.909997 systemd[1]: run-netns-cni\x2d54055e18\x2d4392\x2d8238\x2d385a\x2d20b45d30c00a.mount: Deactivated successfully. May 27 17:47:32.959188 containerd[1533]: time="2025-05-27T17:47:32.959104642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:32.959417 kubelet[2669]: E0527 17:47:32.959360 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:32.959816 kubelet[2669]: E0527 17:47:32.959430 2669 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:32.959816 kubelet[2669]: E0527 17:47:32.959454 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:32.959816 kubelet[2669]: E0527 17:47:32.959498 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"510fc815d05e2b1bf691d3b3efc55114a7d22c3fdba7e5e85b13413ff87f8b77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13" May 27 17:47:33.848895 kubelet[2669]: E0527 17:47:33.848845 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:33.849292 containerd[1533]: 
time="2025-05-27T17:47:33.849253178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:47:34.082648 containerd[1533]: time="2025-05-27T17:47:34.082597516Z" level=error msg="Failed to destroy network for sandbox \"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:34.085012 systemd[1]: run-netns-cni\x2d321257eb\x2d71d6\x2d856a\x2d102c\x2d2a58ec112663.mount: Deactivated successfully. May 27 17:47:34.165209 containerd[1533]: time="2025-05-27T17:47:34.164979420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:34.165464 kubelet[2669]: E0527 17:47:34.165354 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:34.165464 kubelet[2669]: E0527 17:47:34.165407 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:34.165464 kubelet[2669]: E0527 17:47:34.165425 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:34.166095 kubelet[2669]: E0527 17:47:34.165514 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fff816177612535aeaa650903999ce8def9468916e7a50b406734904ef08ac0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7" May 27 17:47:35.850049 containerd[1533]: time="2025-05-27T17:47:35.850006259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}" May 27 17:47:35.973741 containerd[1533]: time="2025-05-27T17:47:35.973674750Z" level=error msg="Failed to destroy network for sandbox 
\"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:35.975993 systemd[1]: run-netns-cni\x2da3f0f26e\x2d723a\x2d1609\x2d9d6a\x2d615b64c2813c.mount: Deactivated successfully. May 27 17:47:36.016420 containerd[1533]: time="2025-05-27T17:47:36.016357000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:36.016678 kubelet[2669]: E0527 17:47:36.016630 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:36.017032 kubelet[2669]: E0527 17:47:36.016700 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:36.017032 kubelet[2669]: E0527 17:47:36.016720 
2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:36.017032 kubelet[2669]: E0527 17:47:36.016762 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c2199c1b888b58278c3b313bb9188d7e6485d2bac32cf3aeb152eb95290db0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad" May 27 17:47:36.848360 kubelet[2669]: E0527 17:47:36.848311 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:36.848896 containerd[1533]: time="2025-05-27T17:47:36.848801835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}" May 27 17:47:37.156190 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:35066.service - OpenSSH per-connection server daemon (10.0.0.1:35066). 
May 27 17:47:37.304225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980490620.mount: Deactivated successfully. May 27 17:47:37.309790 containerd[1533]: time="2025-05-27T17:47:37.309479417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1980490620: write /var/lib/containerd/tmpmounts/containerd-mount1980490620/usr/bin/calico-node: no space left on device" May 27 17:47:37.309790 containerd[1533]: time="2025-05-27T17:47:37.309584834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 17:47:37.310856 kubelet[2669]: E0527 17:47:37.310443 2669 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1980490620: write /var/lib/containerd/tmpmounts/containerd-mount1980490620/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:47:37.310856 kubelet[2669]: E0527 17:47:37.310505 2669 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1980490620: write /var/lib/containerd/tmpmounts/containerd-mount1980490620/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 17:47:37.311209 kubelet[2669]: 
E0527 17:47:37.310729 2669 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FEL
IX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k47nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-68gs9_calico-system(1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1980490620: write /var/lib/containerd/tmpmounts/containerd-mount1980490620/usr/bin/calico-node: no space left on device" logger="UnhandledError" May 27 17:47:37.312076 kubelet[2669]: E0527 17:47:37.311983 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1980490620: write /var/lib/containerd/tmpmounts/containerd-mount1980490620/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07" May 27 17:47:37.358060 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 35066 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:37.360633 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:37.366901 systemd-logind[1504]: New session 22 of user core. May 27 17:47:37.371126 containerd[1533]: time="2025-05-27T17:47:37.371065102Z" level=error msg="Failed to destroy network for sandbox \"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:37.372723 containerd[1533]: time="2025-05-27T17:47:37.372669267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:37.373043 kubelet[2669]: E0527 17:47:37.372993 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:37.373114 kubelet[2669]: E0527 17:47:37.373070 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:37.373114 kubelet[2669]: E0527 17:47:37.373097 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:37.373184 kubelet[2669]: E0527 17:47:37.373158 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d2e3bfd5d89b8b710dcdf16380a7bc7e01405e5f7de8033d1c5976d7f187bb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260" May 27 17:47:37.373208 systemd[1]: Started 
session-22.scope - Session 22 of User core. May 27 17:47:37.375353 systemd[1]: run-netns-cni\x2d4da1f5c1\x2de1ad\x2d6886\x2dccff\x2d3a243fb98b11.mount: Deactivated successfully. May 27 17:47:37.500865 sshd[4530]: Connection closed by 10.0.0.1 port 35066 May 27 17:47:37.501193 sshd-session[4496]: pam_unix(sshd:session): session closed for user core May 27 17:47:37.505873 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:35066.service: Deactivated successfully. May 27 17:47:37.508522 systemd[1]: session-22.scope: Deactivated successfully. May 27 17:47:37.509459 systemd-logind[1504]: Session 22 logged out. Waiting for processes to exit. May 27 17:47:37.511353 systemd-logind[1504]: Removed session 22. May 27 17:47:41.801839 kubelet[2669]: I0527 17:47:41.801787 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:41.801839 kubelet[2669]: I0527 17:47:41.801826 2669 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:41.804034 kubelet[2669]: I0527 17:47:41.803983 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:41.814643 kubelet[2669]: I0527 17:47:41.814616 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:41.814791 kubelet[2669]: I0527 17:47:41.814693 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/calico-node-68gs9","calico-system/csi-node-driver-lf5vj","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814719 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/coredns-7c65d6cfc9-rkzrw" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814729 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814736 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814743 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814751 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814761 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d56548d6d-wbr2m" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814770 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:47:41.814791 kubelet[2669]: E0527 17:47:41.814792 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7kgbm" May 27 17:47:41.815042 kubelet[2669]: E0527 17:47:41.814802 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 17:47:41.815042 kubelet[2669]: E0527 17:47:41.814810 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 17:47:41.815042 kubelet[2669]: I0527 17:47:41.814819 2669 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:42.526309 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074). 
May 27 17:47:42.580614 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:42.582447 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:42.587259 systemd-logind[1504]: New session 23 of user core. May 27 17:47:42.596120 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 17:47:42.717154 sshd[4549]: Connection closed by 10.0.0.1 port 35074 May 27 17:47:42.717539 sshd-session[4547]: pam_unix(sshd:session): session closed for user core May 27 17:47:42.722053 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:35074.service: Deactivated successfully. May 27 17:47:42.724602 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:47:42.725669 systemd-logind[1504]: Session 23 logged out. Waiting for processes to exit. May 27 17:47:42.727368 systemd-logind[1504]: Removed session 23. May 27 17:47:45.847889 kubelet[2669]: E0527 17:47:45.847849 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:47.735298 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:43406.service - OpenSSH per-connection server daemon (10.0.0.1:43406). May 27 17:47:47.790662 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 43406 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME May 27 17:47:47.792128 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:47.796959 systemd-logind[1504]: New session 24 of user core. May 27 17:47:47.806934 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 17:47:47.848716 kubelet[2669]: E0527 17:47:47.848674 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:47:47.850090 containerd[1533]: time="2025-05-27T17:47:47.849234895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,}" May 27 17:47:47.850090 containerd[1533]: time="2025-05-27T17:47:47.849310126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,}" May 27 17:47:47.917027 containerd[1533]: time="2025-05-27T17:47:47.916977145Z" level=error msg="Failed to destroy network for sandbox \"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:47.918473 containerd[1533]: time="2025-05-27T17:47:47.918392172Z" level=error msg="Failed to destroy network for sandbox \"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 17:47:47.919377 systemd[1]: run-netns-cni\x2d93545d75\x2da2fc\x2db4c5\x2d04ba\x2df6393b7e316c.mount: Deactivated successfully. 
May 27 17:47:47.920011 containerd[1533]: time="2025-05-27T17:47:47.919965916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-glbgz,Uid:0520455b-5dee-4789-be5a-7de7b54d80f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:47.921363 containerd[1533]: time="2025-05-27T17:47:47.921277537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lf5vj,Uid:c1488e45-b4c4-4b5a-9c26-a912011cdd13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:47.923341 systemd[1]: run-netns-cni\x2d26c7f9c6\x2dd308\x2d6dfd\x2d7893\x2dd50b35efc77e.mount: Deactivated successfully.
May 27 17:47:47.924036 kubelet[2669]: E0527 17:47:47.923930 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:47.924036 kubelet[2669]: E0527 17:47:47.924013 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz"
May 27 17:47:47.924149 kubelet[2669]: E0527 17:47:47.924038 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-glbgz"
May 27 17:47:47.924149 kubelet[2669]: E0527 17:47:47.924084 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-glbgz_kube-system(0520455b-5dee-4789-be5a-7de7b54d80f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a379855b5d4c89d230e6183a6dd006549c29b4ea0a4f857ac2bd507620c8c4af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-glbgz" podUID="0520455b-5dee-4789-be5a-7de7b54d80f7"
May 27 17:47:47.924888 kubelet[2669]: E0527 17:47:47.924843 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:47.924932 kubelet[2669]: E0527 17:47:47.924890 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj"
May 27 17:47:47.924932 kubelet[2669]: E0527 17:47:47.924910 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lf5vj"
May 27 17:47:47.924992 kubelet[2669]: E0527 17:47:47.924942 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lf5vj_calico-system(c1488e45-b4c4-4b5a-9c26-a912011cdd13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03fc7c18c426a007ce2852975fa54ea488a09b8d400c8e2a915d7508bcb23a48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lf5vj" podUID="c1488e45-b4c4-4b5a-9c26-a912011cdd13"
May 27 17:47:47.937772 sshd[4568]: Connection closed by 10.0.0.1 port 43406
May 27 17:47:47.938118 sshd-session[4566]: pam_unix(sshd:session): session closed for user core
May 27 17:47:47.942088 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:43406.service: Deactivated successfully.
May 27 17:47:47.943971 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:47:47.944866 systemd-logind[1504]: Session 24 logged out. Waiting for processes to exit.
May 27 17:47:47.946211 systemd-logind[1504]: Removed session 24.
May 27 17:47:48.847928 kubelet[2669]: E0527 17:47:48.847878 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:47:48.848317 containerd[1533]: time="2025-05-27T17:47:48.848226360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,}"
May 27 17:47:48.848436 containerd[1533]: time="2025-05-27T17:47:48.848259763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,}"
May 27 17:47:48.906637 containerd[1533]: time="2025-05-27T17:47:48.906576169Z" level=error msg="Failed to destroy network for sandbox \"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.909036 containerd[1533]: time="2025-05-27T17:47:48.908950897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rkzrw,Uid:c5c48ec0-a2ac-4764-bfa4-f9c5138bf260,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.909331 kubelet[2669]: E0527 17:47:48.909294 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.909622 kubelet[2669]: E0527 17:47:48.909569 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw"
May 27 17:47:48.909622 kubelet[2669]: E0527 17:47:48.909597 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rkzrw"
May 27 17:47:48.909500 systemd[1]: run-netns-cni\x2d8655b428\x2d0981\x2dcda4\x2dff1e\x2d4919a256803d.mount: Deactivated successfully.
May 27 17:47:48.909892 kubelet[2669]: E0527 17:47:48.909653 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rkzrw_kube-system(c5c48ec0-a2ac-4764-bfa4-f9c5138bf260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9347eda471a0c072f1caae8f5ae581af9e574a5dbfa00eaf7d81dd33bc95073\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rkzrw" podUID="c5c48ec0-a2ac-4764-bfa4-f9c5138bf260"
May 27 17:47:48.913453 containerd[1533]: time="2025-05-27T17:47:48.913373354Z" level=error msg="Failed to destroy network for sandbox \"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.915705 systemd[1]: run-netns-cni\x2d0fde3214\x2df4d5\x2d806d\x2d7fd9\x2d44692ac4146b.mount: Deactivated successfully.
May 27 17:47:48.916330 containerd[1533]: time="2025-05-27T17:47:48.916279754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849878876c-q75fc,Uid:574ead5f-32c9-4c0b-bef7-1affef3c0fad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.916554 kubelet[2669]: E0527 17:47:48.916519 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 17:47:48.916627 kubelet[2669]: E0527 17:47:48.916576 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc"
May 27 17:47:48.916627 kubelet[2669]: E0527 17:47:48.916601 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849878876c-q75fc"
May 27 17:47:48.916706 kubelet[2669]: E0527 17:47:48.916669 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849878876c-q75fc_calico-system(574ead5f-32c9-4c0b-bef7-1affef3c0fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bff5b9106495476e95d2fd8aac0ccb0e79de5470aadae33e48ee53e8f7f0a084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849878876c-q75fc" podUID="574ead5f-32c9-4c0b-bef7-1affef3c0fad"
May 27 17:47:49.848747 kubelet[2669]: E0527 17:47:49.848681 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:47:51.830223 kubelet[2669]: I0527 17:47:51.830164 2669 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:47:51.830223 kubelet[2669]: I0527 17:47:51.830208 2669 container_gc.go:88] "Attempting to delete unused containers"
May 27 17:47:51.832491 kubelet[2669]: I0527 17:47:51.832463 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 17:47:51.844393 kubelet[2669]: I0527 17:47:51.844327 2669 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:47:51.844548 kubelet[2669]: I0527 17:47:51.844408 2669 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-rkzrw","calico-system/calico-kube-controllers-849878876c-q75fc","kube-system/coredns-7c65d6cfc9-glbgz","calico-system/csi-node-driver-lf5vj","calico-system/calico-node-68gs9","calico-system/calico-typha-6d56548d6d-wbr2m","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-7kgbm","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844435 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-rkzrw"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844444 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-849878876c-q75fc"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844452 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-glbgz"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844459 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lf5vj"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844466 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68gs9"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844478 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d56548d6d-wbr2m"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844487 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844496 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7kgbm"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844504 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 27 17:47:51.844548 kubelet[2669]: E0527 17:47:51.844513 2669 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 27 17:47:51.844548 kubelet[2669]: I0527 17:47:51.844523 2669 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 27 17:47:52.848834 kubelet[2669]: E0527 17:47:52.848767 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\"\"" pod="calico-system/calico-node-68gs9" podUID="1290fdfb-b0ab-446e-a3a4-ace4bfb5ee07"
May 27 17:47:52.956211 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:43418.service - OpenSSH per-connection server daemon (10.0.0.1:43418).
May 27 17:47:53.017274 sshd[4718]: Accepted publickey for core from 10.0.0.1 port 43418 ssh2: RSA SHA256:agsMvw+ROSy4zA6D9AxlWsh30ZOW3irUWPGwzQ4rVME
May 27 17:47:53.018866 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:47:53.023917 systemd-logind[1504]: New session 25 of user core.
May 27 17:47:53.033968 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:47:53.156673 sshd[4720]: Connection closed by 10.0.0.1 port 43418
May 27 17:47:53.156912 sshd-session[4718]: pam_unix(sshd:session): session closed for user core
May 27 17:47:53.162007 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:43418.service: Deactivated successfully.
May 27 17:47:53.165306 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:47:53.166654 systemd-logind[1504]: Session 25 logged out. Waiting for processes to exit.
May 27 17:47:53.168592 systemd-logind[1504]: Removed session 25.