May 27 03:24:30.898729 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:24:30.898766 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:24:30.898775 kernel: BIOS-provided physical RAM map:
May 27 03:24:30.898782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 03:24:30.898789 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 03:24:30.898795 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:24:30.898803 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 27 03:24:30.898813 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 27 03:24:30.898822 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:24:30.898829 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:24:30.898835 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:24:30.898842 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:24:30.898849 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:24:30.898855 kernel: NX (Execute Disable) protection: active
May 27 03:24:30.898866 kernel: APIC: Static calls initialized
May 27 03:24:30.898873 kernel: SMBIOS 2.8 present.
May 27 03:24:30.898884 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 27 03:24:30.898891 kernel: DMI: Memory slots populated: 1/1
May 27 03:24:30.898898 kernel: Hypervisor detected: KVM
May 27 03:24:30.898905 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:24:30.898913 kernel: kvm-clock: using sched offset of 4607154609 cycles
May 27 03:24:30.898920 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:24:30.898928 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:24:30.898936 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:24:30.898946 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:24:30.898954 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 27 03:24:30.898961 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:24:30.898969 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:24:30.898979 kernel: Using GB pages for direct mapping
May 27 03:24:30.899001 kernel: ACPI: Early table checksum verification disabled
May 27 03:24:30.899023 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 27 03:24:30.899033 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899046 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899053 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899061 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 27 03:24:30.899068 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899075 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899083 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899090 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:24:30.899098 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
May 27 03:24:30.899111 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
May 27 03:24:30.899119 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 27 03:24:30.899127 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
May 27 03:24:30.899135 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
May 27 03:24:30.899142 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
May 27 03:24:30.899150 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
May 27 03:24:30.899159 kernel: No NUMA configuration found
May 27 03:24:30.899167 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 27 03:24:30.899175 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 27 03:24:30.899182 kernel: Zone ranges:
May 27 03:24:30.899190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:24:30.899198 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 27 03:24:30.899205 kernel: Normal empty
May 27 03:24:30.899213 kernel: Device empty
May 27 03:24:30.899220 kernel: Movable zone start for each node
May 27 03:24:30.899228 kernel: Early memory node ranges
May 27 03:24:30.899238 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:24:30.899245 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 27 03:24:30.899253 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 27 03:24:30.899260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:24:30.899268 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:24:30.899276 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 27 03:24:30.899283 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:24:30.899294 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:24:30.899302 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:24:30.899313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:24:30.899320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:24:30.899330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:24:30.899338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:24:30.899345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:24:30.899353 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:24:30.899374 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:24:30.899393 kernel: TSC deadline timer available
May 27 03:24:30.899401 kernel: CPU topo: Max. logical packages: 1
May 27 03:24:30.899412 kernel: CPU topo: Max. logical dies: 1
May 27 03:24:30.899420 kernel: CPU topo: Max. dies per package: 1
May 27 03:24:30.899427 kernel: CPU topo: Max. threads per core: 1
May 27 03:24:30.899434 kernel: CPU topo: Num. cores per package: 4
May 27 03:24:30.899442 kernel: CPU topo: Num. threads per package: 4
May 27 03:24:30.899449 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:24:30.899457 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:24:30.899465 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:24:30.899472 kernel: kvm-guest: setup PV sched yield
May 27 03:24:30.899480 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:24:30.899490 kernel: Booting paravirtualized kernel on KVM
May 27 03:24:30.899498 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:24:30.899505 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:24:30.899513 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:24:30.899521 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:24:30.899528 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:24:30.899536 kernel: kvm-guest: PV spinlocks enabled
May 27 03:24:30.899543 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:24:30.899552 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:24:30.899563 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
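Annotation: the "Early memory node ranges" reported above, divided into 4 KiB pages, account exactly for the page total the kernel prints shortly afterwards ("Built 1 zonelists ... Total pages: 642938"). A quick check, with the range bounds copied from the log:

```python
PAGE = 0x1000  # 4 KiB page size
# "Early memory node ranges" from the log; e820 ranges are inclusive.
node_ranges = [
    (0x0000000000001000, 0x000000000009efff),
    (0x0000000000100000, 0x000000009cfdbfff),
]
pages = sum((end + 1 - start) // PAGE for start, end in node_ranges)
print(pages)  # 642938, matching "Total pages: 642938"
```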
May 27 03:24:30.899570 kernel: random: crng init done
May 27 03:24:30.899578 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:24:30.899586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:24:30.899604 kernel: Fallback order for Node 0: 0
May 27 03:24:30.899611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 27 03:24:30.899619 kernel: Policy zone: DMA32
May 27 03:24:30.899627 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:24:30.899637 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:24:30.899644 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:24:30.899653 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:24:30.899660 kernel: Dynamic Preempt: voluntary
May 27 03:24:30.899668 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:24:30.899676 kernel: rcu: RCU event tracing is enabled.
May 27 03:24:30.899684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:24:30.899692 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:24:30.899702 kernel: Rude variant of Tasks RCU enabled.
May 27 03:24:30.899712 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:24:30.899720 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:24:30.899728 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:24:30.899736 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:24:30.899744 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:24:30.899751 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
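Annotation: the hash-table lines above follow one pattern: entry count × bucket size gives the table bytes, and the "order" is log2 of the number of 4096-byte pages holding it. A sketch of that arithmetic (pointer-sized 8-byte buckets on x86-64 is an assumption):

```python
import math

def table_geometry(entries, bucket_bytes=8, page=4096):
    # bytes of the bucket array, and the page order (log2 of page count)
    size = entries * bucket_bytes
    order = int(math.log2(size // page))
    return size, order

print(table_geometry(524288))  # (4194304, 10) -> "Dentry cache" line
print(table_geometry(262144))  # (2097152, 9)  -> "Inode-cache" line
```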
May 27 03:24:30.899759 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:24:30.899767 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:24:30.899784 kernel: Console: colour VGA+ 80x25
May 27 03:24:30.899791 kernel: printk: legacy console [ttyS0] enabled
May 27 03:24:30.899799 kernel: ACPI: Core revision 20240827
May 27 03:24:30.899807 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:24:30.899818 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:24:30.899826 kernel: x2apic enabled
May 27 03:24:30.899836 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:24:30.899844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:24:30.899852 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:24:30.899863 kernel: kvm-guest: setup PV IPIs
May 27 03:24:30.899871 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:24:30.899879 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:24:30.899887 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:24:30.899895 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:24:30.899903 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:24:30.899911 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:24:30.899919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:24:30.899930 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:24:30.899938 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:24:30.899946 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:24:30.899954 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:24:30.899962 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:24:30.899970 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:24:30.899978 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:24:30.899987 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:24:30.899995 kernel: x86/bugs: return thunk changed
May 27 03:24:30.900005 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:24:30.900013 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:24:30.900021 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:24:30.900029 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:24:30.900036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:24:30.900045 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:24:30.900052 kernel: Freeing SMP alternatives memory: 32K
May 27 03:24:30.900062 kernel: pid_max: default: 32768 minimum: 301
May 27 03:24:30.900072 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:24:30.900087 kernel: landlock: Up and running.
May 27 03:24:30.900097 kernel: SELinux: Initializing.
May 27 03:24:30.900108 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:24:30.900122 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:24:30.900131 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:24:30.900139 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:24:30.900147 kernel: ... version: 0
May 27 03:24:30.900155 kernel: ... bit width: 48
May 27 03:24:30.900163 kernel: ... generic registers: 6
May 27 03:24:30.900174 kernel: ... value mask: 0000ffffffffffff
May 27 03:24:30.900182 kernel: ... max period: 00007fffffffffff
May 27 03:24:30.900190 kernel: ... fixed-purpose events: 0
May 27 03:24:30.900198 kernel: ... event mask: 000000000000003f
May 27 03:24:30.900206 kernel: signal: max sigframe size: 1776
May 27 03:24:30.900214 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:24:30.900222 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:24:30.900241 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:24:30.900250 kernel: smp: Bringing up secondary CPUs ...
May 27 03:24:30.900270 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:24:30.900286 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:24:30.900295 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:24:30.900303 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:24:30.900311 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 136900K reserved, 0K cma-reserved)
May 27 03:24:30.900319 kernel: devtmpfs: initialized
May 27 03:24:30.900327 kernel: x86/mm: Memory block size: 128MB
May 27 03:24:30.900335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:24:30.900344 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:24:30.900358 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:24:30.900397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:24:30.900405 kernel: audit: initializing netlink subsys (disabled)
May 27 03:24:30.900414 kernel: audit: type=2000 audit(1748316268.214:1): state=initialized audit_enabled=0 res=1
May 27 03:24:30.900421 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:24:30.900429 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:24:30.900437 kernel: cpuidle: using governor menu
May 27 03:24:30.900445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:24:30.900453 kernel: dca service started, version 1.12.1
May 27 03:24:30.900465 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:24:30.900473 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 03:24:30.900481 kernel: PCI: Using configuration type 1 for base access
May 27 03:24:30.900489 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
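Annotation: the BogoMIPS figures are pure integer arithmetic on lpj (loops per jiffy, printed earlier as lpj=2794748). The kernel prints lpj/(500000/HZ) before the decimal point and (lpj/(5000/HZ)) % 100 after it, which reproduces both the per-CPU "5589.49" and the 4-CPU total "22357.98". A sketch of that formatting (HZ=1000 is an assumption, consistent with these numbers):

```python
def bogomips_str(lpj_sum, hz=1000):
    # mirrors the kernel's "%lu.%02lu BogoMIPS" integer formatting
    whole = lpj_sum // (500000 // hz)
    frac = (lpj_sum // (5000 // hz)) % 100
    return f"{whole}.{frac:02d}"

print(bogomips_str(2794748))      # "5589.49"  (one CPU, lpj=2794748)
print(bogomips_str(4 * 2794748))  # "22357.98" (4 processors activated)
```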
May 27 03:24:30.900497 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:24:30.900505 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:24:30.900513 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:24:30.900521 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:24:30.900529 kernel: ACPI: Added _OSI(Module Device)
May 27 03:24:30.900539 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:24:30.900547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:24:30.900555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:24:30.900563 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:24:30.900571 kernel: ACPI: Interpreter enabled
May 27 03:24:30.900579 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:24:30.900587 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:24:30.900603 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:24:30.900611 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:24:30.900621 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:24:30.900629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:24:30.900838 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:24:30.900967 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:24:30.901090 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:24:30.901100 kernel: PCI host bridge to bus 0000:00
May 27 03:24:30.901288 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:24:30.901437 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:24:30.901563 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:24:30.901687 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 27 03:24:30.901797 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:24:30.901908 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 27 03:24:30.902018 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:24:30.902169 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:24:30.902328 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:24:30.902486 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 03:24:30.902626 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 03:24:30.902748 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 03:24:30.902868 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:24:30.903010 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:24:30.903140 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 27 03:24:30.903263 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 03:24:30.903403 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 03:24:30.903555 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:24:30.903691 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 27 03:24:30.903815 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 03:24:30.903937 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 03:24:30.904083 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:24:30.904223 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 27 03:24:30.904349 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 27 03:24:30.904525 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
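Annotation: the vendor:device pairs in the pci lines identify the emulated hardware on this Q35 guest. A lookup sketch (the table below is a hand-copied excerpt believed to match the public pci.ids database, not queried live, so treat the names as assumptions):

```python
# Minimal excerpt of PCI IDs relevant to this boot log (assumed from the
# public pci.ids database; not fetched at runtime).
PCI_IDS = {
    (0x8086, 0x29c0): "Intel Q35 host bridge",
    (0x1234, 0x1111): "QEMU stdvga",
    (0x1af4, 0x1000): "virtio-net (legacy ID)",
    (0x1af4, 0x1001): "virtio-blk (legacy ID)",
    (0x1af4, 0x1005): "virtio-rng (legacy ID)",
    (0x8086, 0x2918): "Intel ICH9 LPC bridge",
    (0x8086, 0x2922): "Intel ICH9 AHCI (SATA)",
    (0x8086, 0x2930): "Intel ICH9 SMBus",
}

def decode(ven_dev):
    # accepts the "1af4:1001" form printed in the log
    ven, dev = (int(x, 16) for x in ven_dev.split(":"))
    return PCI_IDS.get((ven, dev), "unknown")

print(decode("1af4:1001"))  # virtio-blk (legacy ID)
```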
May 27 03:24:30.904660 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 03:24:30.904797 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:24:30.904920 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:24:30.905063 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:24:30.905197 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 27 03:24:30.905344 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 27 03:24:30.905503 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:24:30.905641 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:24:30.905653 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:24:30.905666 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:24:30.905674 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:24:30.905682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:24:30.905690 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:24:30.905698 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:24:30.905706 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:24:30.905714 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:24:30.905722 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:24:30.905730 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:24:30.905741 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:24:30.905749 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:24:30.905757 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:24:30.905764 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:24:30.905772 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:24:30.905780 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:24:30.905789 kernel: iommu: Default domain type: Translated
May 27 03:24:30.905797 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:24:30.905805 kernel: PCI: Using ACPI for IRQ routing
May 27 03:24:30.905815 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:24:30.905823 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 03:24:30.905831 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 27 03:24:30.905953 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:24:30.906074 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:24:30.906195 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:24:30.906206 kernel: vgaarb: loaded
May 27 03:24:30.906214 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:24:30.906225 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:24:30.906233 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:24:30.906242 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:24:30.906250 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:24:30.906258 kernel: pnp: PnP ACPI init
May 27 03:24:30.906443 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:24:30.906457 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:24:30.906465 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:24:30.906473 kernel: NET: Registered PF_INET protocol family
May 27 03:24:30.906485 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:24:30.906494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:24:30.906502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:24:30.906510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:24:30.906518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:24:30.906526 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:24:30.906534 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:24:30.906542 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:24:30.906553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:24:30.906561 kernel: NET: Registered PF_XDP protocol family
May 27 03:24:30.906685 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:24:30.906814 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:24:30.906928 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:24:30.907054 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 27 03:24:30.907168 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:24:30.907279 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 27 03:24:30.907290 kernel: PCI: CLS 0 bytes, default 64
May 27 03:24:30.907312 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:24:30.907323 kernel: Initialise system trusted keyrings
May 27 03:24:30.907333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:24:30.907343 kernel: Key type asymmetric registered
May 27 03:24:30.907353 kernel: Asymmetric key parser 'x509' registered
May 27 03:24:30.907379 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:24:30.907390 kernel: io scheduler mq-deadline registered
May 27 03:24:30.907399 kernel: io scheduler kyber registered
May 27 03:24:30.907407 kernel: io scheduler bfq registered
May 27 03:24:30.907420 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:24:30.907429 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:24:30.907437 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:24:30.907448 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:24:30.907459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:24:30.907470 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:24:30.907480 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:24:30.907491 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:24:30.907499 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:24:30.907703 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:24:30.907720 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:24:30.907874 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:24:30.908009 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:24:30 UTC (1748316270)
May 27 03:24:30.908125 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 03:24:30.908135 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:24:30.908143 kernel: NET: Registered PF_INET6 protocol family
May 27 03:24:30.908151 kernel: Segment Routing with IPv6
May 27 03:24:30.908164 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:24:30.908172 kernel: NET: Registered PF_PACKET protocol family
May 27 03:24:30.908180 kernel: Key type dns_resolver registered
May 27 03:24:30.908188 kernel: IPI shorthand broadcast: enabled
May 27 03:24:30.908196 kernel: sched_clock: Marking stable (3242003538, 126077307)->(3398257386, -30176541)
May 27 03:24:30.908204 kernel: registered taskstats version 1
May 27 03:24:30.908212 kernel: Loading compiled-in X.509 certificates
May 27 03:24:30.908220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:24:30.908228 kernel: Demotion targets for Node 0: null
May 27 03:24:30.908239 kernel: Key type .fscrypt registered
May 27 03:24:30.908247 kernel: Key type fscrypt-provisioning registered
May 27 03:24:30.908255 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:24:30.908263 kernel: ima: Allocated hash algorithm: sha1
May 27 03:24:30.908271 kernel: ima: No architecture policies found
May 27 03:24:30.908279 kernel: clk: Disabling unused clocks
May 27 03:24:30.908287 kernel: Warning: unable to open an initial console.
May 27 03:24:30.908295 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:24:30.908305 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:24:30.908314 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:24:30.908322 kernel: Run /init as init process
May 27 03:24:30.908330 kernel: with arguments:
May 27 03:24:30.908337 kernel: /init
May 27 03:24:30.908345 kernel: with environment:
May 27 03:24:30.908353 kernel: HOME=/
May 27 03:24:30.908376 kernel: TERM=linux
May 27 03:24:30.908384 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:24:30.908394 systemd[1]: Successfully made /usr/ read-only.
May 27 03:24:30.908418 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:24:30.908430 systemd[1]: Detected virtualization kvm.
May 27 03:24:30.908438 systemd[1]: Detected architecture x86-64.
May 27 03:24:30.908447 systemd[1]: Running in initrd.
May 27 03:24:30.908455 systemd[1]: No hostname configured, using default hostname.
May 27 03:24:30.908467 systemd[1]: Hostname set to .
May 27 03:24:30.908475 systemd[1]: Initializing machine ID from VM UUID.
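Annotation: the rtc_cmos line above pairs the wall-clock time with its Unix epoch value, and the two are consistent: 1748316270 seconds after 1970-01-01 UTC is exactly 2025-05-27T03:24:30 UTC. Quick check:

```python
from datetime import datetime, timezone

# The log prints: rtc_cmos 00:04: setting system clock to
#   2025-05-27T03:24:30 UTC (1748316270)
t = datetime.fromtimestamp(1748316270, tz=timezone.utc)
print(t.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2025-05-27T03:24:30 UTC
```

The audit line earlier (audit(1748316268.214:1)) sits about two seconds before this, which matches its position in the boot sequence.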
May 27 03:24:30.908484 systemd[1]: Queued start job for default target initrd.target.
May 27 03:24:30.908493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:24:30.908504 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:24:30.908513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:24:30.908522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:24:30.908531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:24:30.908543 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:24:30.908553 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:24:30.908562 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:24:30.908571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:24:30.908580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:24:30.908598 systemd[1]: Reached target paths.target - Path Units.
May 27 03:24:30.908610 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:24:30.908619 systemd[1]: Reached target swap.target - Swaps.
May 27 03:24:30.908627 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:24:30.908636 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:24:30.908645 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:24:30.908654 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
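Annotation: the `\x2d` sequences in the device unit names above are systemd's unit-name escaping: the leading '/' of the path is dropped, remaining '/' separators become '-', and characters such as a literal '-' are hex-escaped as `\x2d`. A simplified sketch of that mapping (the real systemd-escape handles more characters; this only covers the names seen here):

```python
def path_to_device_unit(path):
    # simplified systemd path escaping: literal '-' becomes \x2d,
    # then '/' separators are joined with '-'
    parts = path.strip("/").split("/")
    escaped = [p.replace("-", "\\x2d") for p in parts]
    return "-".join(escaped) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
```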
May 27 03:24:30.908663 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:24:30.908672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:24:30.908681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:24:30.908692 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:24:30.908701 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:24:30.908710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:24:30.908719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:24:30.908728 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:24:30.908741 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:24:30.908750 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:24:30.908759 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:24:30.908768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:24:30.908777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:24:30.908786 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:24:30.908799 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:24:30.908808 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:24:30.908817 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:24:30.908844 systemd-journald[220]: Collecting audit messages is disabled.
May 27 03:24:30.908866 systemd-journald[220]: Journal started
May 27 03:24:30.908885 systemd-journald[220]: Runtime Journal (/run/log/journal/a165f03e62c74357a7be8953b8ef2f0d) is 6M, max 48.6M, 42.5M free.
May 27 03:24:30.899082 systemd-modules-load[223]: Inserted module 'overlay'
May 27 03:24:30.947305 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:24:30.947343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:24:30.947359 kernel: Bridge firewalling registered
May 27 03:24:30.927138 systemd-modules-load[223]: Inserted module 'br_netfilter'
May 27 03:24:30.948409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:24:30.951556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:24:30.953766 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:24:30.958432 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:24:30.960649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:24:30.964829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:24:30.972452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:24:30.982323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:24:30.986558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:24:30.989076 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:24:30.993221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:24:30.994810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:24:30.997598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:24:31.001191 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:24:31.029885 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:24:31.051076 systemd-resolved[260]: Positive Trust Anchors:
May 27 03:24:31.051103 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:24:31.051137 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:24:31.053994 systemd-resolved[260]: Defaulting to hostname 'linux'.
May 27 03:24:31.055436 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:24:31.061439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:24:31.157433 kernel: SCSI subsystem initialized
May 27 03:24:31.174429 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:24:31.187403 kernel: iscsi: registered transport (tcp)
May 27 03:24:31.214515 kernel: iscsi: registered transport (qla4xxx)
May 27 03:24:31.214621 kernel: QLogic iSCSI HBA Driver
May 27 03:24:31.239910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:24:31.262349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:24:31.266849 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:24:31.343739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:24:31.346450 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:24:31.406408 kernel: raid6: avx2x4 gen() 28313 MB/s
May 27 03:24:31.425398 kernel: raid6: avx2x2 gen() 30161 MB/s
May 27 03:24:31.442702 kernel: raid6: avx2x1 gen() 23519 MB/s
May 27 03:24:31.442746 kernel: raid6: using algorithm avx2x2 gen() 30161 MB/s
May 27 03:24:31.469412 kernel: raid6: .... xor() 19300 MB/s, rmw enabled
May 27 03:24:31.469498 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:24:31.503406 kernel: xor: automatically using best checksumming function avx
May 27 03:24:31.683438 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:24:31.693347 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:24:31.697720 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:24:31.733057 systemd-udevd[470]: Using default interface naming scheme 'v255'.
May 27 03:24:31.739720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:24:31.740931 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:24:31.859695 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
May 27 03:24:31.892552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:24:31.896722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:24:31.979618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:24:31.983948 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:24:32.026410 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 27 03:24:32.033389 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:24:32.036746 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 03:24:32.050534 kernel: AES CTR mode by8 optimization enabled
May 27 03:24:32.057809 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:24:32.057868 kernel: GPT:9289727 != 19775487
May 27 03:24:32.057885 kernel: libata version 3.00 loaded.
May 27 03:24:32.057901 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:24:32.057917 kernel: GPT:9289727 != 19775487
May 27 03:24:32.057931 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:24:32.057947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:24:32.072604 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:24:32.083122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:24:32.083296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:24:32.088020 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:24:32.088721 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:24:32.088550 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:24:32.096328 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:24:32.096986 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:24:32.097536 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:24:32.097219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:24:32.101465 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:24:32.106411 kernel: scsi host0: ahci
May 27 03:24:32.109409 kernel: scsi host1: ahci
May 27 03:24:32.141814 kernel: scsi host2: ahci
May 27 03:24:32.145393 kernel: scsi host3: ahci
May 27 03:24:32.148391 kernel: scsi host4: ahci
May 27 03:24:32.158076 kernel: scsi host5: ahci
May 27 03:24:32.158259 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 27 03:24:32.158272 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 27 03:24:32.158283 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 27 03:24:32.158301 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 27 03:24:32.158312 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 27 03:24:32.158323 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 27 03:24:32.165396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 03:24:32.176690 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 03:24:32.208158 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:24:32.226834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:24:32.235136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 03:24:32.236703 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 03:24:32.237778 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:24:32.410918 disk-uuid[631]: Primary Header is updated.
May 27 03:24:32.410918 disk-uuid[631]: Secondary Entries is updated.
May 27 03:24:32.410918 disk-uuid[631]: Secondary Header is updated.
May 27 03:24:32.415414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:24:32.421484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:24:32.471953 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:24:32.472009 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:24:32.472021 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:24:32.472031 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:24:32.472851 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:24:32.473746 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 27 03:24:32.474593 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 27 03:24:32.474618 kernel: ata3.00: applying bridge limits
May 27 03:24:32.475784 kernel: ata3.00: configured for UDMA/100
May 27 03:24:32.476386 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 27 03:24:32.521418 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 27 03:24:32.521718 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 03:24:32.537396 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 27 03:24:32.915303 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:24:32.918230 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:24:32.920135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:24:32.921571 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:24:32.925198 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:24:32.952347 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:24:33.437442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:24:33.437574 disk-uuid[632]: The operation has completed successfully.
May 27 03:24:33.467750 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:24:33.467911 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:24:33.514738 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:24:33.540191 sh[660]: Success
May 27 03:24:33.559395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:24:33.559440 kernel: device-mapper: uevent: version 1.0.3
May 27 03:24:33.559452 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:24:33.570413 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:24:33.606129 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:24:33.608555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:24:33.626672 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:24:33.635770 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:24:33.635824 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (672)
May 27 03:24:33.637537 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:24:33.638556 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:24:33.638585 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:24:33.644877 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:24:33.647265 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:24:33.649578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:24:33.652579 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:24:33.654781 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:24:33.689377 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (704)
May 27 03:24:33.689453 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:24:33.689470 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:24:33.690947 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:24:33.699417 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:24:33.701192 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:24:33.704898 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:24:33.919793 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:24:33.923611 ignition[747]: Ignition 2.21.0
May 27 03:24:33.923628 ignition[747]: Stage: fetch-offline
May 27 03:24:33.923683 ignition[747]: no configs at "/usr/lib/ignition/base.d"
May 27 03:24:33.923695 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:33.923854 ignition[747]: parsed url from cmdline: ""
May 27 03:24:33.923864 ignition[747]: no config URL provided
May 27 03:24:33.923876 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:24:33.923887 ignition[747]: no config at "/usr/lib/ignition/user.ign"
May 27 03:24:33.923924 ignition[747]: op(1): [started] loading QEMU firmware config module
May 27 03:24:33.923930 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 03:24:33.932435 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:24:33.938430 ignition[747]: op(1): [finished] loading QEMU firmware config module
May 27 03:24:33.983551 ignition[747]: parsing config with SHA512: d516cba8d33e049c68167417dae741608f1a32ed7fb087de8bfb1eb73fc5198798abd1132e05685a2b322b0c14b2c25d290313dcb738cb7fd31b22e37f186588
May 27 03:24:33.988010 unknown[747]: fetched base config from "system"
May 27 03:24:33.988023 unknown[747]: fetched user config from "qemu"
May 27 03:24:33.988431 ignition[747]: fetch-offline: fetch-offline passed
May 27 03:24:33.988530 ignition[747]: Ignition finished successfully
May 27 03:24:33.992009 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:24:33.995826 systemd-networkd[850]: lo: Link UP
May 27 03:24:33.995834 systemd-networkd[850]: lo: Gained carrier
May 27 03:24:33.998780 systemd-networkd[850]: Enumeration completed
May 27 03:24:33.998966 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:24:34.002115 systemd[1]: Reached target network.target - Network.
May 27 03:24:34.002197 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 03:24:34.003118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:24:34.007529 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:24:34.007537 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:24:34.011304 systemd-networkd[850]: eth0: Link UP
May 27 03:24:34.011313 systemd-networkd[850]: eth0: Gained carrier
May 27 03:24:34.011322 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:24:34.031408 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:24:34.053125 ignition[854]: Ignition 2.21.0
May 27 03:24:34.053140 ignition[854]: Stage: kargs
May 27 03:24:34.053275 ignition[854]: no configs at "/usr/lib/ignition/base.d"
May 27 03:24:34.053286 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:34.054632 ignition[854]: kargs: kargs passed
May 27 03:24:34.054691 ignition[854]: Ignition finished successfully
May 27 03:24:34.059177 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:24:34.061332 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:24:34.111203 ignition[863]: Ignition 2.21.0
May 27 03:24:34.111219 ignition[863]: Stage: disks
May 27 03:24:34.111415 ignition[863]: no configs at "/usr/lib/ignition/base.d"
May 27 03:24:34.111428 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:34.113975 ignition[863]: disks: disks passed
May 27 03:24:34.114031 ignition[863]: Ignition finished successfully
May 27 03:24:34.117016 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:24:34.117386 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:24:34.120100 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:24:34.120333 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:24:34.124291 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:24:34.124705 systemd[1]: Reached target basic.target - Basic System.
May 27 03:24:34.128349 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:24:34.170550 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 03:24:34.182701 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:24:34.184117 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:24:34.324407 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:24:34.325389 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:24:34.326291 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:24:34.330132 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:24:34.332169 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:24:34.332584 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:24:34.332637 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:24:34.332667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:24:34.351485 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:24:34.353714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:24:34.376392 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (883)
May 27 03:24:34.376464 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:24:34.376476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:24:34.378166 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:24:34.383926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:24:34.405435 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:24:34.412110 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
May 27 03:24:34.416829 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:24:34.421126 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:24:34.525642 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:24:34.528127 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:24:34.530843 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:24:34.551399 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:24:34.570103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:24:34.586996 ignition[997]: INFO : Ignition 2.21.0
May 27 03:24:34.586996 ignition[997]: INFO : Stage: mount
May 27 03:24:34.588834 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:24:34.588834 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:34.591071 ignition[997]: INFO : mount: mount passed
May 27 03:24:34.591907 ignition[997]: INFO : Ignition finished successfully
May 27 03:24:34.595119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:24:34.598617 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:24:34.634128 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:24:34.635771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:24:34.671522 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1011)
May 27 03:24:34.671574 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:24:34.671587 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:24:34.672645 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:24:34.677646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:24:34.761273 ignition[1028]: INFO : Ignition 2.21.0
May 27 03:24:34.761273 ignition[1028]: INFO : Stage: files
May 27 03:24:34.763312 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:24:34.763312 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:34.765703 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:24:34.766926 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:24:34.766926 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:24:34.769937 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:24:34.769937 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:24:34.769937 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:24:34.769727 unknown[1028]: wrote ssh authorized keys file for user: core
May 27 03:24:34.775571 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:24:34.775571 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 03:24:34.822764 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:24:35.025807 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:24:35.025807 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:24:35.030441 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:24:35.030441 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:24:35.030441 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:24:35.030441 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:24:35.037515 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:24:35.037515 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:24:35.041047 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:24:35.195389 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:24:35.197911 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:24:35.199889 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:24:35.294473 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:24:35.294473 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:24:35.300448 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 03:24:36.026522 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:24:36.038608 systemd-networkd[850]: eth0: Gained IPv6LL
May 27 03:24:36.624180 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:24:36.624180 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:24:36.628325 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:24:36.849126 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:24:36.849126 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:24:36.849126 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 03:24:36.849126 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:24:36.868077 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:24:36.868077 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 03:24:36.868077 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 27 03:24:36.887196 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:24:36.894237 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:24:36.896051 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 03:24:36.896051 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:24:36.896051 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:24:36.896051 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:24:36.896051 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:24:36.896051 ignition[1028]: INFO : files: files passed
May 27 03:24:36.896051 ignition[1028]: INFO : Ignition finished successfully
May 27 03:24:36.904286 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:24:36.906997 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:24:36.911417 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:24:36.924332 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:24:36.924552 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:24:36.926861 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 03:24:36.933617 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:24:36.936921 initrd-setup-root-after-ignition[1059]: grep:
May 27 03:24:36.937780 initrd-setup-root-after-ignition[1059]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:24:36.937780 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:24:36.939478 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:24:36.944004 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:24:36.947576 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:24:36.997046 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:24:36.997194 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:24:36.999829 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:24:37.002100 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:24:37.004173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:24:37.005229 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:24:37.036255 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:24:37.038563 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:24:37.073938 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:24:37.075805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:24:37.078896 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:24:37.080392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:24:37.080541 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:24:37.085297 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:24:37.087753 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:24:37.089734 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:24:37.091820 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:24:37.092966 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:24:37.093292 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:24:37.093828 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:24:37.094163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:24:37.094699 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:24:37.095025 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:24:37.095398 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:24:37.095875 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:24:37.095989 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:24:37.096747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:24:37.097151 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:24:37.097733 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:24:37.097868 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:24:37.119101 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:24:37.119300 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:24:37.124320 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:24:37.124504 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:24:37.125801 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:24:37.126111 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:24:37.130677 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:24:37.131882 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:24:37.134722 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:24:37.135137 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:24:37.135259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:24:37.142421 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:24:37.143468 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:24:37.143782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:24:37.143945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:24:37.146901 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:24:37.147031 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:24:37.154112 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:24:37.154322 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:24:37.154562 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:24:37.156457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:24:37.161012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:24:37.163240 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:24:37.166037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:24:37.166156 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:24:37.174014 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:24:37.174263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:24:37.190809 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:24:37.196488 ignition[1083]: INFO : Ignition 2.21.0
May 27 03:24:37.196488 ignition[1083]: INFO : Stage: umount
May 27 03:24:37.198949 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:24:37.198949 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:24:37.198949 ignition[1083]: INFO : umount: umount passed
May 27 03:24:37.198949 ignition[1083]: INFO : Ignition finished successfully
May 27 03:24:37.201517 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:24:37.201698 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:24:37.204186 systemd[1]: Stopped target network.target - Network.
May 27 03:24:37.206103 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:24:37.206216 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:24:37.210650 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:24:37.210754 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:24:37.213142 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:24:37.213239 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:24:37.213390 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:24:37.213479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:24:37.218533 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:24:37.219593 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:24:37.226716 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:24:37.226885 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:24:37.230635 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:24:37.231867 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:24:37.233496 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:24:37.233565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:24:37.237332 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:24:37.239509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:24:37.239579 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:24:37.241092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:24:37.246329 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:24:37.253384 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:24:37.262024 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:24:37.264334 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:24:37.264452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:24:37.269632 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:24:37.269737 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:24:37.271267 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:24:37.271332 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:24:37.278009 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:24:37.278111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:24:37.278571 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:24:37.278779 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:24:37.308072 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:24:37.308269 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:24:37.312586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:24:37.312717 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:24:37.316060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:24:37.316140 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:24:37.319744 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:24:37.319858 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:24:37.323824 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:24:37.323998 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:24:37.326708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:24:37.326822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:24:37.329039 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:24:37.333310 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:24:37.333394 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:24:37.338489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:24:37.338559 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:24:37.342423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:24:37.342521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:24:37.349234 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:24:37.349334 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:24:37.349421 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:24:37.356219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:24:37.356419 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:24:37.642883 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:24:37.643065 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:24:37.647470 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:24:37.649139 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:24:37.649209 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:24:37.652358 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:24:37.675659 systemd[1]: Switching root.
May 27 03:24:37.743822 systemd-journald[220]: Journal stopped
May 27 03:24:39.095484 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 27 03:24:39.095545 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:24:39.095567 kernel: SELinux: policy capability open_perms=1
May 27 03:24:39.095579 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:24:39.095591 kernel: SELinux: policy capability always_check_network=0
May 27 03:24:39.095602 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:24:39.095614 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:24:39.095625 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:24:39.095642 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:24:39.095654 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:24:39.095668 kernel: audit: type=1403 audit(1748316278.238:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:24:39.095685 systemd[1]: Successfully loaded SELinux policy in 53.141ms.
May 27 03:24:39.095705 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.427ms.
May 27 03:24:39.095718 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:24:39.095732 systemd[1]: Detected virtualization kvm.
May 27 03:24:39.095744 systemd[1]: Detected architecture x86-64.
May 27 03:24:39.095756 systemd[1]: Detected first boot.
May 27 03:24:39.095769 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:24:39.095781 zram_generator::config[1128]: No configuration found.
May 27 03:24:39.095809 kernel: Guest personality initialized and is inactive
May 27 03:24:39.095821 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:24:39.095832 kernel: Initialized host personality
May 27 03:24:39.095849 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:24:39.095860 systemd[1]: Populated /etc with preset unit settings.
May 27 03:24:39.095874 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:24:39.095887 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:24:39.095900 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:24:39.095912 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:24:39.095927 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:24:39.095939 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:24:39.095951 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:24:39.095963 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:24:39.095976 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:24:39.095989 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:24:39.096001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:24:39.096013 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:24:39.096031 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:24:39.096044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:24:39.096056 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:24:39.096068 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:24:39.096081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:24:39.096093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:24:39.096106 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:24:39.096118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:24:39.096133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:24:39.096146 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:24:39.096159 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:24:39.096171 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:24:39.096183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:24:39.096196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:24:39.096208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:24:39.096221 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:24:39.096233 systemd[1]: Reached target swap.target - Swaps.
May 27 03:24:39.096248 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:24:39.096260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:24:39.096272 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:24:39.096284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:24:39.096296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:24:39.096308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:24:39.096320 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:24:39.096333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:24:39.096345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:24:39.096373 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:24:39.096387 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:24:39.096407 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:24:39.096420 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:24:39.096432 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:24:39.096446 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:24:39.096458 systemd[1]: Reached target machines.target - Containers.
May 27 03:24:39.096470 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:24:39.096485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:24:39.096498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:24:39.096510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:24:39.096522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:24:39.096535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:24:39.096547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:24:39.096560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:24:39.096572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:24:39.096584 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:24:39.096602 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:24:39.096614 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:24:39.096626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:24:39.096638 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:24:39.096651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:24:39.096664 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:24:39.096676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:24:39.096688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:24:39.096722 systemd-journald[1193]: Collecting audit messages is disabled.
May 27 03:24:39.096753 systemd-journald[1193]: Journal started
May 27 03:24:39.096779 systemd-journald[1193]: Runtime Journal (/run/log/journal/a165f03e62c74357a7be8953b8ef2f0d) is 6M, max 48.6M, 42.5M free.
May 27 03:24:38.861117 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:24:38.880238 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 03:24:38.880901 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:24:39.101971 kernel: loop: module loaded
May 27 03:24:39.102015 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:24:39.108436 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:24:39.115476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:24:39.117883 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:24:39.117922 systemd[1]: Stopped verity-setup.service.
May 27 03:24:39.123989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:24:39.129406 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:24:39.129462 kernel: fuse: init (API version 7.41)
May 27 03:24:39.134953 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:24:39.136503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:24:39.139497 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:24:39.140907 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:24:39.142437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:24:39.144016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:24:39.145715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:24:39.147926 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:24:39.148222 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:24:39.150058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:24:39.151706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:24:39.153520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:24:39.155443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:24:39.157982 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:24:39.158269 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:24:39.160004 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:24:39.160272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:24:39.162191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:24:39.169980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:24:39.171454 kernel: ACPI: bus type drm_connector registered
May 27 03:24:39.172652 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:24:39.174980 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:24:39.175274 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:24:39.177023 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:24:39.194948 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:24:39.199667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:24:39.205291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:24:39.206696 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:24:39.206745 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:24:39.209585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:24:39.215672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:24:39.217098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:24:39.218972 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:24:39.223523 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:24:39.225106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:24:39.233410 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:24:39.234782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:24:39.236638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:24:39.238616 systemd-journald[1193]: Time spent on flushing to /var/log/journal/a165f03e62c74357a7be8953b8ef2f0d is 29.712ms for 976 entries.
May 27 03:24:39.238616 systemd-journald[1193]: System Journal (/var/log/journal/a165f03e62c74357a7be8953b8ef2f0d) is 8M, max 195.6M, 187.6M free.
May 27 03:24:39.281542 systemd-journald[1193]: Received client request to flush runtime journal.
May 27 03:24:39.281584 kernel: loop0: detected capacity change from 0 to 224512
May 27 03:24:39.240514 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:24:39.244831 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:24:39.246833 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:24:39.248732 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:24:39.259519 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:24:39.285476 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:24:39.287343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:24:39.289084 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:24:39.291955 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:24:39.296072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:24:39.300678 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:24:39.307654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:24:39.388413 kernel: loop1: detected capacity change from 0 to 113872
May 27 03:24:39.462414 kernel: loop2: detected capacity change from 0 to 146240
May 27 03:24:39.499835 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:24:39.502996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:24:39.503945 kernel: loop3: detected capacity change from 0 to 224512
May 27 03:24:39.513888 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:24:39.520465 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:24:39.552503 kernel: loop5: detected capacity change from 0 to 146240
May 27 03:24:39.577131 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 03:24:39.578977 (sd-merge)[1265]: Merged extensions into '/usr'.
May 27 03:24:39.591160 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:24:39.591181 systemd[1]: Reloading...
May 27 03:24:39.617788 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 03:24:39.617814 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 27 03:24:39.696403 zram_generator::config[1314]: No configuration found.
May 27 03:24:39.813518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:24:39.878273 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:24:39.922659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:24:39.923294 systemd[1]: Reloading finished in 331 ms.
May 27 03:24:39.962442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:24:39.964112 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:24:39.965817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:24:39.983620 systemd[1]: Starting ensure-sysext.service...
May 27 03:24:39.985907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:24:40.071952 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
May 27 03:24:40.071973 systemd[1]: Reloading...
May 27 03:24:40.091312 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:24:40.091410 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:24:40.091792 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:24:40.092114 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:24:40.093514 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:24:40.093930 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
May 27 03:24:40.094056 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. May 27 03:24:40.120954 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:24:40.120973 systemd-tmpfiles[1336]: Skipping /boot May 27 03:24:40.129410 zram_generator::config[1363]: No configuration found. May 27 03:24:40.144098 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:24:40.144118 systemd-tmpfiles[1336]: Skipping /boot May 27 03:24:40.235704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:24:40.322435 systemd[1]: Reloading finished in 250 ms. May 27 03:24:40.369068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:24:40.379034 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:24:40.402515 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:24:40.406138 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:24:40.410332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:24:40.415912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:24:40.421280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.421503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:24:40.423586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:24:40.427078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 27 03:24:40.429462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:24:40.444656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:24:40.444810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:24:40.448949 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:24:40.451269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.453253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:24:40.453539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:24:40.455178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:24:40.474592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:24:40.506624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:24:40.508660 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:24:40.508895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:24:40.519037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.519280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:24:40.521018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:24:40.531919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 27 03:24:40.535158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:24:40.536557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:24:40.536677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:24:40.536784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.538097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:24:40.540487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:24:40.540735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:24:40.542765 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:24:40.544996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:24:40.545288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:24:40.554850 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:24:40.555111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:24:40.561141 systemd[1]: Finished ensure-sysext.service. May 27 03:24:40.563910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.564147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:24:40.573477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:24:40.576914 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 27 03:24:40.581278 augenrules[1446]: No rules May 27 03:24:40.580650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:24:40.582297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:24:40.582379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:24:40.591779 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:24:40.593584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:24:40.596427 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:24:40.596818 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:24:40.599193 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:24:40.601189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:24:40.601452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:24:40.603235 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:24:40.603521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:24:40.605159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:24:40.605419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:24:40.608823 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:24:40.615611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 27 03:24:40.615706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:24:40.623204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:24:40.628528 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:24:40.629958 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:24:40.652562 systemd-resolved[1408]: Positive Trust Anchors: May 27 03:24:40.652947 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:24:40.652981 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:24:40.659497 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:24:40.660830 systemd-resolved[1408]: Defaulting to hostname 'linux'. May 27 03:24:40.663501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:24:40.664791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:24:40.774982 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:24:40.775912 systemd-udevd[1462]: Using default interface naming scheme 'v255'. 
May 27 03:24:40.776558 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:24:40.796117 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:24:40.797628 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:24:40.798901 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:24:40.800252 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:24:40.802227 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:24:40.803700 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:24:40.804966 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:24:40.807414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:24:40.809095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:24:40.809142 systemd[1]: Reached target paths.target - Path Units. May 27 03:24:40.810299 systemd[1]: Reached target timers.target - Timer Units. May 27 03:24:40.812632 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:24:40.815686 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:24:40.824773 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:24:40.826687 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:24:40.829573 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:24:40.840492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 27 03:24:40.843294 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:24:40.848006 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:24:40.851041 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:24:40.861640 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:24:40.864457 systemd[1]: Reached target basic.target - Basic System. May 27 03:24:40.865929 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:24:40.865966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:24:40.867679 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:24:40.871527 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:24:40.874148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:24:40.880648 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:24:40.882471 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:24:40.887662 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:24:40.892292 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:24:40.901161 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:24:40.905986 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Refreshing passwd entry cache May 27 03:24:40.906332 oslogin_cache_refresh[1500]: Refreshing passwd entry cache May 27 03:24:40.906530 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 27 03:24:40.908694 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Failure getting users, quitting May 27 03:24:40.908738 oslogin_cache_refresh[1500]: Failure getting users, quitting May 27 03:24:40.908819 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:24:40.908851 oslogin_cache_refresh[1500]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:24:40.908932 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Refreshing group entry cache May 27 03:24:40.908974 oslogin_cache_refresh[1500]: Refreshing group entry cache May 27 03:24:40.909633 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Failure getting groups, quitting May 27 03:24:40.909689 oslogin_cache_refresh[1500]: Failure getting groups, quitting May 27 03:24:40.909744 google_oslogin_nss_cache[1500]: oslogin_cache_refresh[1500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:24:40.909779 oslogin_cache_refresh[1500]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:24:40.911903 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:24:40.921731 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:24:40.924447 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:24:40.926244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:24:40.929740 jq[1498]: false May 27 03:24:40.930553 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:24:40.954208 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 27 03:24:40.957617 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:24:40.959635 jq[1514]: true May 27 03:24:40.959735 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:24:40.959984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:24:40.960422 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:24:40.960769 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:24:40.963904 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:24:40.964164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:24:40.985993 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:24:40.997998 extend-filesystems[1499]: Found loop3 May 27 03:24:40.999522 extend-filesystems[1499]: Found loop4 May 27 03:24:40.999522 extend-filesystems[1499]: Found loop5 May 27 03:24:40.999522 extend-filesystems[1499]: Found sr0 May 27 03:24:40.999522 extend-filesystems[1499]: Found vda May 27 03:24:40.999522 extend-filesystems[1499]: Found vda1 May 27 03:24:40.999522 extend-filesystems[1499]: Found vda2 May 27 03:24:41.007520 extend-filesystems[1499]: Found vda3 May 27 03:24:41.007520 extend-filesystems[1499]: Found usr May 27 03:24:41.007520 extend-filesystems[1499]: Found vda4 May 27 03:24:41.007520 extend-filesystems[1499]: Found vda6 May 27 03:24:41.007520 extend-filesystems[1499]: Found vda7 May 27 03:24:41.007520 extend-filesystems[1499]: Found vda9 May 27 03:24:41.002829 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:24:41.012252 jq[1516]: true May 27 03:24:41.013527 update_engine[1508]: I20250527 03:24:41.009711 1508 main.cc:92] Flatcar Update Engine starting May 27 03:24:41.004003 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 27 03:24:41.032518 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:24:41.032930 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:24:41.041219 tar[1515]: linux-amd64/LICENSE May 27 03:24:41.041219 tar[1515]: linux-amd64/helm May 27 03:24:41.053457 dbus-daemon[1496]: [system] SELinux support is enabled May 27 03:24:41.054617 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:24:41.057787 update_engine[1508]: I20250527 03:24:41.057646 1508 update_check_scheduler.cc:74] Next update check in 4m52s May 27 03:24:41.058908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:24:41.058950 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:24:41.060734 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:24:41.060768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:24:41.063985 systemd[1]: Started update-engine.service - Update Engine. May 27 03:24:41.067722 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:24:41.111663 bash[1552]: Updated "/home/core/.ssh/authorized_keys" May 27 03:24:41.135225 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:24:41.137835 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:24:41.140323 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:24:41.149940 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 27 03:24:41.151693 systemd-logind[1505]: New seat seat0. May 27 03:24:41.153457 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:24:41.160410 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:24:41.163904 systemd-networkd[1495]: lo: Link UP May 27 03:24:41.163919 systemd-networkd[1495]: lo: Gained carrier May 27 03:24:41.166927 systemd-networkd[1495]: Enumeration completed May 27 03:24:41.167139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:24:41.167951 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:24:41.167965 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:24:41.169011 systemd-networkd[1495]: eth0: Link UP May 27 03:24:41.169288 systemd-networkd[1495]: eth0: Gained carrier May 27 03:24:41.169312 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:24:41.170765 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:24:41.175105 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:24:41.179132 systemd[1]: Reached target network.target - Network. May 27 03:24:41.182132 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:24:41.184030 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:24:41.184392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:24:41.185842 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. May 27 03:24:41.186560 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:24:42.171825 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 27 03:24:42.171893 systemd-timesyncd[1454]: Initial clock synchronization to Tue 2025-05-27 03:24:42.171663 UTC. May 27 03:24:42.172202 systemd-resolved[1408]: Clock change detected. Flushing caches. May 27 03:24:42.173544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:24:42.179278 kernel: ACPI: button: Power Button [PWRF] May 27 03:24:42.179702 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:24:42.183863 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:24:42.197730 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:24:42.198013 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:24:42.202317 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:24:42.209771 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:24:42.212283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:24:42.214285 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:24:42.229082 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:24:42.234620 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:24:42.237443 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:24:42.242961 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:24:42.244075 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:24:42.240336 systemd[1]: Reached target getty.target - Login Prompts. 
May 27 03:24:42.443313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:24:42.493626 kernel: kvm_amd: TSC scaling supported May 27 03:24:42.493680 kernel: kvm_amd: Nested Virtualization enabled May 27 03:24:42.493694 kernel: kvm_amd: Nested Paging enabled May 27 03:24:42.494850 kernel: kvm_amd: LBR virtualization supported May 27 03:24:42.496672 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:24:42.497659 systemd-logind[1505]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:24:42.499173 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 03:24:42.499231 kernel: kvm_amd: Virtual GIF supported May 27 03:24:42.544181 kernel: EDAC MC: Ver: 3.0.0 May 27 03:24:42.638427 containerd[1580]: time="2025-05-27T03:24:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:24:42.641329 containerd[1580]: time="2025-05-27T03:24:42.641196344Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:24:42.654334 containerd[1580]: time="2025-05-27T03:24:42.654281794Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.281µs" May 27 03:24:42.654334 containerd[1580]: time="2025-05-27T03:24:42.654322421Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:24:42.654419 containerd[1580]: time="2025-05-27T03:24:42.654342949Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:24:42.654604 containerd[1580]: time="2025-05-27T03:24:42.654575385Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:24:42.654604 
containerd[1580]: time="2025-05-27T03:24:42.654602786Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:24:42.654663 containerd[1580]: time="2025-05-27T03:24:42.654632272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:24:42.654736 containerd[1580]: time="2025-05-27T03:24:42.654710989Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:24:42.654736 containerd[1580]: time="2025-05-27T03:24:42.654730506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:24:42.655069 containerd[1580]: time="2025-05-27T03:24:42.655037952Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:24:42.655069 containerd[1580]: time="2025-05-27T03:24:42.655058070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:24:42.655069 containerd[1580]: time="2025-05-27T03:24:42.655068470Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:24:42.655181 containerd[1580]: time="2025-05-27T03:24:42.655076364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:24:42.655250 containerd[1580]: time="2025-05-27T03:24:42.655224322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:24:42.655519 containerd[1580]: time="2025-05-27T03:24:42.655482125Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:24:42.655548 containerd[1580]: time="2025-05-27T03:24:42.655519906Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:24:42.655548 containerd[1580]: time="2025-05-27T03:24:42.655530576Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:24:42.655587 containerd[1580]: time="2025-05-27T03:24:42.655568087Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:24:42.656126 containerd[1580]: time="2025-05-27T03:24:42.656082501Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:24:42.656416 containerd[1580]: time="2025-05-27T03:24:42.656223526Z" level=info msg="metadata content store policy set" policy=shared May 27 03:24:42.663212 containerd[1580]: time="2025-05-27T03:24:42.663179350Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:24:42.663283 containerd[1580]: time="2025-05-27T03:24:42.663227060Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:24:42.663283 containerd[1580]: time="2025-05-27T03:24:42.663245504Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:24:42.663283 containerd[1580]: time="2025-05-27T03:24:42.663258359Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:24:42.663283 containerd[1580]: time="2025-05-27T03:24:42.663271223Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:24:42.663283 containerd[1580]: 
time="2025-05-27T03:24:42.663282704Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663294797Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663307180Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663319583Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663329141Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663339371Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:24:42.663452 containerd[1580]: time="2025-05-27T03:24:42.663352946Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:24:42.663565 containerd[1580]: time="2025-05-27T03:24:42.663488009Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:24:42.663565 containerd[1580]: time="2025-05-27T03:24:42.663508778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:24:42.663565 containerd[1580]: time="2025-05-27T03:24:42.663549655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:24:42.663619 containerd[1580]: time="2025-05-27T03:24:42.663569622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:24:42.663619 containerd[1580]: time="2025-05-27T03:24:42.663580823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:24:42.663619 containerd[1580]: time="2025-05-27T03:24:42.663593327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:24:42.663619 containerd[1580]: time="2025-05-27T03:24:42.663613945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:24:42.663700 containerd[1580]: time="2025-05-27T03:24:42.663625848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:24:42.663700 containerd[1580]: time="2025-05-27T03:24:42.663637720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:24:42.663700 containerd[1580]: time="2025-05-27T03:24:42.663650153Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:24:42.663700 containerd[1580]: time="2025-05-27T03:24:42.663668598Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:24:42.663774 containerd[1580]: time="2025-05-27T03:24:42.663756924Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:24:42.663774 containerd[1580]: time="2025-05-27T03:24:42.663771942Z" level=info msg="Start snapshots syncer"
May 27 03:24:42.663813 containerd[1580]: time="2025-05-27T03:24:42.663799103Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:24:42.664081 containerd[1580]: time="2025-05-27T03:24:42.664035526Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664113532Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664229430Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664346319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664368080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664379541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664394700Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664411261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664425037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:24:42.664455 containerd[1580]: time="2025-05-27T03:24:42.664438652Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664465252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664477585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664488385Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664520706Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664533540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664542437Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664551504Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664559008Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664567684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664578214Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664595697Z" level=info msg="runtime interface created"
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664600796Z" level=info msg="created NRI interface"
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664624741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:24:42.664629 containerd[1580]: time="2025-05-27T03:24:42.664637515Z" level=info msg="Connect containerd service"
May 27 03:24:42.664878 containerd[1580]: time="2025-05-27T03:24:42.664671068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:24:42.665532 containerd[1580]: time="2025-05-27T03:24:42.665503499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:24:42.691990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:24:42.814721 tar[1515]: linux-amd64/README.md
May 27 03:24:42.830268 containerd[1580]: time="2025-05-27T03:24:42.830186198Z" level=info msg="Start subscribing containerd event"
May 27 03:24:42.830268 containerd[1580]: time="2025-05-27T03:24:42.830263072Z" level=info msg="Start recovering state"
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830379050Z" level=info msg="Start event monitor"
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830393747Z" level=info msg="Start cni network conf syncer for default"
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830411030Z" level=info msg="Start streaming server"
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830425286Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830433321Z" level=info msg="runtime interface starting up..."
May 27 03:24:42.830443 containerd[1580]: time="2025-05-27T03:24:42.830439523Z" level=info msg="starting plugins..."
May 27 03:24:42.830584 containerd[1580]: time="2025-05-27T03:24:42.830444502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:24:42.830584 containerd[1580]: time="2025-05-27T03:24:42.830455042Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:24:42.830584 containerd[1580]: time="2025-05-27T03:24:42.830538008Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:24:42.830759 containerd[1580]: time="2025-05-27T03:24:42.830738744Z" level=info msg="containerd successfully booted in 0.192854s"
May 27 03:24:42.830803 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:24:42.837953 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:24:43.807393 systemd-networkd[1495]: eth0: Gained IPv6LL
May 27 03:24:43.810498 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:24:43.812559 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:24:43.815498 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 27 03:24:43.818147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:24:43.831574 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:24:43.903030 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 27 03:24:43.903421 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 27 03:24:43.905534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:24:43.914692 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:24:45.072513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:24:45.074289 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 03:24:45.077232 systemd[1]: Startup finished in 3.343s (kernel) + 7.572s (initrd) + 5.904s (userspace) = 16.821s.
May 27 03:24:45.099019 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:24:45.656354 kubelet[1659]: E0527 03:24:45.656252    1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:24:45.660753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:24:45.660953 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:24:45.661392 systemd[1]: kubelet.service: Consumed 1.585s CPU time, 263.9M memory peak.
May 27 03:24:45.737823 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:24:45.739195 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:60500.service - OpenSSH per-connection server daemon (10.0.0.1:60500).
May 27 03:24:45.802427 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:45.804200 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:45.810748 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:24:45.811850 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:24:45.818030 systemd-logind[1505]: New session 1 of user core.
May 27 03:24:45.839788 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:24:45.842863 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:24:45.862729 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:24:45.865108 systemd-logind[1505]: New session c1 of user core.
May 27 03:24:46.018564 systemd[1676]: Queued start job for default target default.target.
May 27 03:24:46.036686 systemd[1676]: Created slice app.slice - User Application Slice.
May 27 03:24:46.036722 systemd[1676]: Reached target paths.target - Paths.
May 27 03:24:46.036781 systemd[1676]: Reached target timers.target - Timers.
May 27 03:24:46.038495 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:24:46.050075 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:24:46.050238 systemd[1676]: Reached target sockets.target - Sockets.
May 27 03:24:46.050282 systemd[1676]: Reached target basic.target - Basic System.
May 27 03:24:46.050325 systemd[1676]: Reached target default.target - Main User Target.
May 27 03:24:46.050357 systemd[1676]: Startup finished in 178ms.
May 27 03:24:46.050885 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:24:46.068292 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:24:46.131415 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504).
May 27 03:24:46.180778 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:46.182449 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:46.187098 systemd-logind[1505]: New session 2 of user core.
May 27 03:24:46.197284 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:24:46.250428 sshd[1689]: Connection closed by 10.0.0.1 port 60504
May 27 03:24:46.250762 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
May 27 03:24:46.263979 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:60504.service: Deactivated successfully.
May 27 03:24:46.265781 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:24:46.266559 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit.
May 27 03:24:46.269244 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:60510.service - OpenSSH per-connection server daemon (10.0.0.1:60510).
May 27 03:24:46.269797 systemd-logind[1505]: Removed session 2.
May 27 03:24:46.319913 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 60510 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:46.321429 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:46.326160 systemd-logind[1505]: New session 3 of user core.
May 27 03:24:46.336287 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:24:46.385980 sshd[1697]: Connection closed by 10.0.0.1 port 60510
May 27 03:24:46.386453 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 27 03:24:46.394777 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:60510.service: Deactivated successfully.
May 27 03:24:46.396676 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:24:46.397536 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit.
May 27 03:24:46.400723 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:60520.service - OpenSSH per-connection server daemon (10.0.0.1:60520).
May 27 03:24:46.401529 systemd-logind[1505]: Removed session 3.
May 27 03:24:46.448178 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 60520 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:46.449730 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:46.454479 systemd-logind[1505]: New session 4 of user core.
May 27 03:24:46.471289 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:24:46.525231 sshd[1705]: Connection closed by 10.0.0.1 port 60520
May 27 03:24:46.525511 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
May 27 03:24:46.542836 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:60520.service: Deactivated successfully.
May 27 03:24:46.544752 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:24:46.545481 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit.
May 27 03:24:46.548329 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:60534.service - OpenSSH per-connection server daemon (10.0.0.1:60534).
May 27 03:24:46.549152 systemd-logind[1505]: Removed session 4.
May 27 03:24:46.602177 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 60534 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:46.603895 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:46.608581 systemd-logind[1505]: New session 5 of user core.
May 27 03:24:46.624402 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:24:46.684300 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:24:46.684615 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:24:46.709660 sudo[1714]: pam_unix(sudo:session): session closed for user root
May 27 03:24:46.711472 sshd[1713]: Connection closed by 10.0.0.1 port 60534
May 27 03:24:46.711828 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
May 27 03:24:46.725662 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:60534.service: Deactivated successfully.
May 27 03:24:46.727477 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:24:46.728354 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit.
May 27 03:24:46.731093 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548).
May 27 03:24:46.731707 systemd-logind[1505]: Removed session 5.
May 27 03:24:46.787401 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:46.789313 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:46.793904 systemd-logind[1505]: New session 6 of user core.
May 27 03:24:46.807408 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:24:46.863732 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:24:46.864089 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:24:46.872554 sudo[1724]: pam_unix(sudo:session): session closed for user root
May 27 03:24:46.879736 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:24:46.880076 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:24:46.890832 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:24:46.949919 augenrules[1746]: No rules
May 27 03:24:46.952091 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:24:46.952403 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:24:46.953654 sudo[1723]: pam_unix(sudo:session): session closed for user root
May 27 03:24:46.955354 sshd[1722]: Connection closed by 10.0.0.1 port 60548
May 27 03:24:46.955606 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
May 27 03:24:46.964917 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:60548.service: Deactivated successfully.
May 27 03:24:46.966821 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:24:46.967559 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit.
May 27 03:24:46.970549 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556).
May 27 03:24:46.971185 systemd-logind[1505]: Removed session 6.
May 27 03:24:47.021304 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:24:47.022789 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:47.027542 systemd-logind[1505]: New session 7 of user core.
May 27 03:24:47.045280 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:24:47.098905 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:24:47.099268 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:24:47.417764 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:24:47.440508 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:24:47.660501 dockerd[1778]: time="2025-05-27T03:24:47.660426563Z" level=info msg="Starting up"
May 27 03:24:47.662022 dockerd[1778]: time="2025-05-27T03:24:47.661989143Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:24:48.219684 dockerd[1778]: time="2025-05-27T03:24:48.219610502Z" level=info msg="Loading containers: start."
May 27 03:24:48.233182 kernel: Initializing XFRM netlink socket
May 27 03:24:48.517599 systemd-networkd[1495]: docker0: Link UP
May 27 03:24:48.525030 dockerd[1778]: time="2025-05-27T03:24:48.524969069Z" level=info msg="Loading containers: done."
May 27 03:24:48.539835 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1849611338-merged.mount: Deactivated successfully.
May 27 03:24:48.542247 dockerd[1778]: time="2025-05-27T03:24:48.542190615Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:24:48.542341 dockerd[1778]: time="2025-05-27T03:24:48.542317794Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:24:48.542502 dockerd[1778]: time="2025-05-27T03:24:48.542478695Z" level=info msg="Initializing buildkit"
May 27 03:24:48.575923 dockerd[1778]: time="2025-05-27T03:24:48.575857793Z" level=info msg="Completed buildkit initialization"
May 27 03:24:48.582721 dockerd[1778]: time="2025-05-27T03:24:48.582646885Z" level=info msg="Daemon has completed initialization"
May 27 03:24:48.582879 dockerd[1778]: time="2025-05-27T03:24:48.582741242Z" level=info msg="API listen on /run/docker.sock"
May 27 03:24:48.582989 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:24:49.258165 containerd[1580]: time="2025-05-27T03:24:49.257530152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 03:24:49.986095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114497399.mount: Deactivated successfully.
May 27 03:24:50.977832 containerd[1580]: time="2025-05-27T03:24:50.977764876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:50.978658 containerd[1580]: time="2025-05-27T03:24:50.978585375Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 27 03:24:50.979934 containerd[1580]: time="2025-05-27T03:24:50.979856689Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:50.982516 containerd[1580]: time="2025-05-27T03:24:50.982470781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:50.983233 containerd[1580]: time="2025-05-27T03:24:50.983190581Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.725609424s"
May 27 03:24:50.983233 containerd[1580]: time="2025-05-27T03:24:50.983225296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 03:24:50.984097 containerd[1580]: time="2025-05-27T03:24:50.984061495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 03:24:52.300122 containerd[1580]: time="2025-05-27T03:24:52.300047558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:52.300855 containerd[1580]: time="2025-05-27T03:24:52.300784450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 27 03:24:52.303162 containerd[1580]: time="2025-05-27T03:24:52.302222016Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:52.306032 containerd[1580]: time="2025-05-27T03:24:52.305965366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:52.306931 containerd[1580]: time="2025-05-27T03:24:52.306862629Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.322771218s"
May 27 03:24:52.306931 containerd[1580]: time="2025-05-27T03:24:52.306924785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 03:24:52.307549 containerd[1580]: time="2025-05-27T03:24:52.307487581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 03:24:54.005752 containerd[1580]: time="2025-05-27T03:24:54.005675930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:54.006955 containerd[1580]: time="2025-05-27T03:24:54.006919312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 27 03:24:54.008652 containerd[1580]: time="2025-05-27T03:24:54.008610293Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:54.011565 containerd[1580]: time="2025-05-27T03:24:54.011529067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:54.012471 containerd[1580]: time="2025-05-27T03:24:54.012434575Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.704918812s"
May 27 03:24:54.012471 containerd[1580]: time="2025-05-27T03:24:54.012465052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 03:24:54.013047 containerd[1580]: time="2025-05-27T03:24:54.013006508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 03:24:55.333200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711643593.mount: Deactivated successfully.
May 27 03:24:55.753686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:24:55.755486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:24:56.166598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:24:56.187542 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:24:56.233989 kubelet[2068]: E0527 03:24:56.233865    2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:24:56.241367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:24:56.241577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:24:56.242067 systemd[1]: kubelet.service: Consumed 261ms CPU time, 111.4M memory peak.
May 27 03:24:56.316441 containerd[1580]: time="2025-05-27T03:24:56.316354788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:56.317635 containerd[1580]: time="2025-05-27T03:24:56.317579144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 27 03:24:56.318874 containerd[1580]: time="2025-05-27T03:24:56.318836452Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:56.320977 containerd[1580]: time="2025-05-27T03:24:56.320928646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:56.321558 containerd[1580]: time="2025-05-27T03:24:56.321510256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.30847233s"
May 27 03:24:56.321558 containerd[1580]: time="2025-05-27T03:24:56.321550842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 27 03:24:56.322044 containerd[1580]: time="2025-05-27T03:24:56.322019521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 03:24:57.001401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267346482.mount: Deactivated successfully.
May 27 03:24:57.746093 containerd[1580]: time="2025-05-27T03:24:57.746018488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:57.747227 containerd[1580]: time="2025-05-27T03:24:57.747165338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 27 03:24:57.748876 containerd[1580]: time="2025-05-27T03:24:57.748837104Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:57.752524 containerd[1580]: time="2025-05-27T03:24:57.752454007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:57.753554 containerd[1580]: time="2025-05-27T03:24:57.753504707Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.431457704s"
May 27 03:24:57.753554 containerd[1580]: time="2025-05-27T03:24:57.753535324Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 03:24:57.754300 containerd[1580]: time="2025-05-27T03:24:57.754248392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:24:58.233660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929824487.mount: Deactivated successfully.
May 27 03:24:58.241604 containerd[1580]: time="2025-05-27T03:24:58.241531031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:24:58.242454 containerd[1580]: time="2025-05-27T03:24:58.242411482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 03:24:58.243564 containerd[1580]: time="2025-05-27T03:24:58.243510533Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:24:58.246632 containerd[1580]: time="2025-05-27T03:24:58.246575160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:24:58.247452 containerd[1580]: time="2025-05-27T03:24:58.247406970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.104597ms"
May 27 03:24:58.247452 containerd[1580]: time="2025-05-27T03:24:58.247445863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:24:58.248066 containerd[1580]: time="2025-05-27T03:24:58.247976378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 03:24:58.815251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506013812.mount: Deactivated successfully.
May 27 03:25:00.877055 containerd[1580]: time="2025-05-27T03:25:00.876968366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:25:00.878341 containerd[1580]: time="2025-05-27T03:25:00.878286548Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 27 03:25:00.879811 containerd[1580]: time="2025-05-27T03:25:00.879772966Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:25:00.882692 containerd[1580]: time="2025-05-27T03:25:00.882615707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:25:00.883643 containerd[1580]: time="2025-05-27T03:25:00.883588000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.635557511s" May 27 03:25:00.883643 containerd[1580]: time="2025-05-27T03:25:00.883639146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 03:25:03.251262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:25:03.251438 systemd[1]: kubelet.service: Consumed 261ms CPU time, 111.4M memory peak. May 27 03:25:03.253663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:25:03.278843 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-7.scope)... May 27 03:25:03.278861 systemd[1]: Reloading... May 27 03:25:03.369403 zram_generator::config[2265]: No configuration found. May 27 03:25:03.635889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:25:03.754838 systemd[1]: Reloading finished in 475 ms. May 27 03:25:03.830021 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:25:03.830129 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:25:03.830475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:25:03.830522 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak. May 27 03:25:03.832324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:25:04.019227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:25:04.031550 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:25:04.070444 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:25:04.070444 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:25:04.070444 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:25:04.070889 kubelet[2307]: I0527 03:25:04.070502 2307 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:25:04.243780 kubelet[2307]: I0527 03:25:04.243734 2307 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 03:25:04.243780 kubelet[2307]: I0527 03:25:04.243762 2307 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:25:04.244014 kubelet[2307]: I0527 03:25:04.243996 2307 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 03:25:04.271756 kubelet[2307]: E0527 03:25:04.271633 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:04.275259 kubelet[2307]: I0527 03:25:04.275200 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:25:04.282118 kubelet[2307]: I0527 03:25:04.282093 2307 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:25:04.288338 kubelet[2307]: I0527 03:25:04.288297 2307 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:25:04.289470 kubelet[2307]: I0527 03:25:04.289427 2307 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:25:04.289645 kubelet[2307]: I0527 03:25:04.289460 2307 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:25:04.289762 kubelet[2307]: I0527 03:25:04.289653 2307 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:25:04.289762 kubelet[2307]: I0527 03:25:04.289662 2307 container_manager_linux.go:304] "Creating device plugin manager"
May 27 03:25:04.289808 kubelet[2307]: I0527 03:25:04.289796 2307 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:25:04.292673 kubelet[2307]: I0527 03:25:04.292644 2307 kubelet.go:446] "Attempting to sync node with API server"
May 27 03:25:04.292673 kubelet[2307]: I0527 03:25:04.292666 2307 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:25:04.292757 kubelet[2307]: I0527 03:25:04.292689 2307 kubelet.go:352] "Adding apiserver pod source"
May 27 03:25:04.292757 kubelet[2307]: I0527 03:25:04.292699 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:25:04.296917 kubelet[2307]: I0527 03:25:04.296806 2307 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:25:04.296917 kubelet[2307]: W0527 03:25:04.296792 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
May 27 03:25:04.296917 kubelet[2307]: E0527 03:25:04.296870 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:04.297387 kubelet[2307]: W0527 03:25:04.297339 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
May 27 03:25:04.297503 kubelet[2307]: E0527 03:25:04.297483 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:04.297564 kubelet[2307]: I0527 03:25:04.297370 2307 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:25:04.298178 kubelet[2307]: W0527 03:25:04.298153 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 03:25:04.300836 kubelet[2307]: I0527 03:25:04.300813 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:25:04.300932 kubelet[2307]: I0527 03:25:04.300857 2307 server.go:1287] "Started kubelet"
May 27 03:25:04.301228 kubelet[2307]: I0527 03:25:04.301196 2307 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:25:04.307794 kubelet[2307]: I0527 03:25:04.307216 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:25:04.308773 kubelet[2307]: I0527 03:25:04.308008 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:25:04.308773 kubelet[2307]: I0527 03:25:04.308003 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:25:04.308773 kubelet[2307]: I0527 03:25:04.308317 2307 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:25:04.309120 kubelet[2307]: I0527 03:25:04.309079 2307 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:25:04.309391 kubelet[2307]: E0527 03:25:04.309365 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.309444 kubelet[2307]: I0527 03:25:04.309405 2307 server.go:479] "Adding debug handlers to kubelet server"
May 27 03:25:04.309911 kubelet[2307]: I0527 03:25:04.309887 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:25:04.309979 kubelet[2307]: I0527 03:25:04.309957 2307 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:25:04.310524 kubelet[2307]: W0527 03:25:04.310462 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
May 27 03:25:04.310682 kubelet[2307]: E0527 03:25:04.310650 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:04.310924 kubelet[2307]: E0527 03:25:04.310862 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms"
May 27 03:25:04.312071 kubelet[2307]: E0527 03:25:04.310837 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843446a70dfec7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:25:04.300829819 +0000 UTC m=+0.264411507,LastTimestamp:2025-05-27 03:25:04.300829819 +0000 UTC m=+0.264411507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 03:25:04.312258 kubelet[2307]: I0527 03:25:04.312235 2307 factory.go:221] Registration of the systemd container factory successfully
May 27 03:25:04.312369 kubelet[2307]: I0527 03:25:04.312344 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:25:04.313691 kubelet[2307]: E0527 03:25:04.313627 2307 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:25:04.313835 kubelet[2307]: I0527 03:25:04.313815 2307 factory.go:221] Registration of the containerd container factory successfully
May 27 03:25:04.324165 kubelet[2307]: I0527 03:25:04.324118 2307 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:25:04.324306 kubelet[2307]: I0527 03:25:04.324256 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:25:04.324306 kubelet[2307]: I0527 03:25:04.324280 2307 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:25:04.410558 kubelet[2307]: E0527 03:25:04.410497 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.510898 kubelet[2307]: E0527 03:25:04.510840 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.512591 kubelet[2307]: E0527 03:25:04.512521 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms"
May 27 03:25:04.611912 kubelet[2307]: E0527 03:25:04.611757 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.712357 kubelet[2307]: E0527 03:25:04.712296 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.801285 kubelet[2307]: I0527 03:25:04.801207 2307 policy_none.go:49] "None policy: Start"
May 27 03:25:04.801285 kubelet[2307]: I0527 03:25:04.801237 2307 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:25:04.801285 kubelet[2307]: I0527 03:25:04.801251 2307 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:25:04.803798 kubelet[2307]: I0527 03:25:04.803769 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 03:25:04.805599 kubelet[2307]: I0527 03:25:04.805530 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 03:25:04.805653 kubelet[2307]: I0527 03:25:04.805607 2307 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 03:25:04.805653 kubelet[2307]: I0527 03:25:04.805646 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:25:04.805653 kubelet[2307]: I0527 03:25:04.805653 2307 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 03:25:04.805731 kubelet[2307]: E0527 03:25:04.805710 2307 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:25:04.807391 kubelet[2307]: W0527 03:25:04.807289 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
May 27 03:25:04.807391 kubelet[2307]: E0527 03:25:04.807348 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:04.812448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 03:25:04.812805 kubelet[2307]: E0527 03:25:04.812455 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:04.826829 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 03:25:04.830233 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 03:25:04.848270 kubelet[2307]: I0527 03:25:04.848183 2307 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 03:25:04.848469 kubelet[2307]: I0527 03:25:04.848438 2307 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:25:04.848508 kubelet[2307]: I0527 03:25:04.848467 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:25:04.848899 kubelet[2307]: I0527 03:25:04.848761 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:25:04.849765 kubelet[2307]: E0527 03:25:04.849735 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:25:04.849867 kubelet[2307]: E0527 03:25:04.849854 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 03:25:04.913159 kubelet[2307]: I0527 03:25:04.912547 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:04.913159 kubelet[2307]: I0527 03:25:04.912619 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:04.913159 kubelet[2307]: I0527 03:25:04.912653 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:04.913159 kubelet[2307]: I0527 03:25:04.912682 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:04.913159 kubelet[2307]: I0527 03:25:04.912707 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:04.913358 kubelet[2307]: I0527 03:25:04.912732 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:04.913358 kubelet[2307]: I0527 03:25:04.912756 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:04.913358 kubelet[2307]: I0527 03:25:04.912780 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 27 03:25:04.913358 kubelet[2307]: I0527 03:25:04.912804 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:04.913358 kubelet[2307]: E0527 03:25:04.913036 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms"
May 27 03:25:04.917573 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice.
May 27 03:25:04.943482 kubelet[2307]: E0527 03:25:04.943410 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:04.947200 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice.
May 27 03:25:04.950365 kubelet[2307]: I0527 03:25:04.950337 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:25:04.950852 kubelet[2307]: E0527 03:25:04.950813 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
May 27 03:25:04.960943 kubelet[2307]: E0527 03:25:04.960897 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:04.963809 systemd[1]: Created slice kubepods-burstable-pod8fc85e7f9e5e019ab9e4dbfca06d011a.slice - libcontainer container kubepods-burstable-pod8fc85e7f9e5e019ab9e4dbfca06d011a.slice.
May 27 03:25:04.966030 kubelet[2307]: E0527 03:25:04.965995 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:25:05.152342 kubelet[2307]: I0527 03:25:05.152279 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:25:05.152852 kubelet[2307]: E0527 03:25:05.152678 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 27 03:25:05.186278 kubelet[2307]: W0527 03:25:05.186060 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 27 03:25:05.186278 kubelet[2307]: E0527 03:25:05.186118 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" May 27 03:25:05.245114 containerd[1580]: time="2025-05-27T03:25:05.245062692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 03:25:05.258888 kubelet[2307]: W0527 03:25:05.258820 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 27 03:25:05.258993 kubelet[2307]: E0527 03:25:05.258901 2307 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" May 27 03:25:05.261836 containerd[1580]: time="2025-05-27T03:25:05.261758303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 03:25:05.267588 containerd[1580]: time="2025-05-27T03:25:05.267510801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fc85e7f9e5e019ab9e4dbfca06d011a,Namespace:kube-system,Attempt:0,}" May 27 03:25:05.284547 containerd[1580]: time="2025-05-27T03:25:05.284482959Z" level=info msg="connecting to shim d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e" address="unix:///run/containerd/s/8fb77c3894ea795c1a07d65b5ab1ef829a40e5a0ac601f4afc27bbfca7b69cd5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:05.314754 containerd[1580]: time="2025-05-27T03:25:05.314679909Z" level=info msg="connecting to shim 007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec" address="unix:///run/containerd/s/e47b144bfb60f1f09d5a7e16a90f51f76c7327740e2af6fe11f514a3fe77f744" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:05.316979 containerd[1580]: time="2025-05-27T03:25:05.316943575Z" level=info msg="connecting to shim 6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c" address="unix:///run/containerd/s/ffdb2240133d8f8c5358208220683a0f7108a1f3e8f830a74ed50f09ba583288" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:05.327375 systemd[1]: Started cri-containerd-d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e.scope - libcontainer container d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e. 
May 27 03:25:05.367270 systemd[1]: Started cri-containerd-007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec.scope - libcontainer container 007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec.
May 27 03:25:05.373561 systemd[1]: Started cri-containerd-6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c.scope - libcontainer container 6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c.
May 27 03:25:05.426639 containerd[1580]: time="2025-05-27T03:25:05.426569362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e\""
May 27 03:25:05.434937 containerd[1580]: time="2025-05-27T03:25:05.434877022Z" level=info msg="CreateContainer within sandbox \"d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 03:25:05.437010 containerd[1580]: time="2025-05-27T03:25:05.436732732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8fc85e7f9e5e019ab9e4dbfca06d011a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c\""
May 27 03:25:05.440631 containerd[1580]: time="2025-05-27T03:25:05.440602289Z" level=info msg="CreateContainer within sandbox \"6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 03:25:05.442822 containerd[1580]: time="2025-05-27T03:25:05.442771707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec\""
May 27 03:25:05.444925 containerd[1580]: time="2025-05-27T03:25:05.444886794Z" level=info msg="CreateContainer within sandbox \"007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 03:25:05.461405 containerd[1580]: time="2025-05-27T03:25:05.461367742Z" level=info msg="Container d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:05.472025 containerd[1580]: time="2025-05-27T03:25:05.471956870Z" level=info msg="Container ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:05.478308 containerd[1580]: time="2025-05-27T03:25:05.478253298Z" level=info msg="Container 1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880: CDI devices from CRI Config.CDIDevices: []"
May 27 03:25:05.484353 containerd[1580]: time="2025-05-27T03:25:05.484297713Z" level=info msg="CreateContainer within sandbox \"6a4e1062fff39f6de64f34f5171e94bfaafacf5b67eab0b33b1374c40e868e6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7\""
May 27 03:25:05.484993 containerd[1580]: time="2025-05-27T03:25:05.484960086Z" level=info msg="StartContainer for \"ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7\""
May 27 03:25:05.486298 containerd[1580]: time="2025-05-27T03:25:05.486268830Z" level=info msg="connecting to shim ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7" address="unix:///run/containerd/s/ffdb2240133d8f8c5358208220683a0f7108a1f3e8f830a74ed50f09ba583288" protocol=ttrpc version=3
May 27 03:25:05.489163 containerd[1580]: time="2025-05-27T03:25:05.489093227Z" level=info msg="CreateContainer within sandbox \"d0aac2c6827f8098627b5dafd338641be892317217aeffb05e693ce1ebb4668e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd\""
May 27 03:25:05.489624 containerd[1580]: time="2025-05-27T03:25:05.489580621Z" level=info msg="StartContainer for \"d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd\""
May 27 03:25:05.490976 containerd[1580]: time="2025-05-27T03:25:05.490936944Z" level=info msg="connecting to shim d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd" address="unix:///run/containerd/s/8fb77c3894ea795c1a07d65b5ab1ef829a40e5a0ac601f4afc27bbfca7b69cd5" protocol=ttrpc version=3
May 27 03:25:05.497285 containerd[1580]: time="2025-05-27T03:25:05.497209628Z" level=info msg="CreateContainer within sandbox \"007b7f00fc10762e5da9f0dd90278f84baa0b016bbd04f4d2ef8f810cb9206ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880\""
May 27 03:25:05.498231 containerd[1580]: time="2025-05-27T03:25:05.498188714Z" level=info msg="StartContainer for \"1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880\""
May 27 03:25:05.500504 containerd[1580]: time="2025-05-27T03:25:05.500370966Z" level=info msg="connecting to shim 1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880" address="unix:///run/containerd/s/e47b144bfb60f1f09d5a7e16a90f51f76c7327740e2af6fe11f514a3fe77f744" protocol=ttrpc version=3
May 27 03:25:05.506405 systemd[1]: Started cri-containerd-ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7.scope - libcontainer container ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7.
May 27 03:25:05.510182 systemd[1]: Started cri-containerd-d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd.scope - libcontainer container d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd.
May 27 03:25:05.528378 systemd[1]: Started cri-containerd-1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880.scope - libcontainer container 1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880.
May 27 03:25:05.540556 kubelet[2307]: W0527 03:25:05.540454 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
May 27 03:25:05.540809 kubelet[2307]: E0527 03:25:05.540782 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
May 27 03:25:05.555601 kubelet[2307]: I0527 03:25:05.555567 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:25:05.557451 kubelet[2307]: E0527 03:25:05.557425 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
May 27 03:25:05.586161 containerd[1580]: time="2025-05-27T03:25:05.586068570Z" level=info msg="StartContainer for \"d80d07c5293ec140a4c549b1b39c875f8cfe25e4ab9ce8307f80ea409f85a2bd\" returns successfully"
May 27 03:25:05.586496 containerd[1580]: time="2025-05-27T03:25:05.586425029Z" level=info msg="StartContainer for \"ca3a954801b4675b2b2e699ebc48b1534606af11e909c9bec212baf1ca7298e7\" returns successfully"
May 27 03:25:05.604985 containerd[1580]: time="2025-05-27T03:25:05.604919172Z" level=info msg="StartContainer for \"1fad069986fd636976333b5aa807dc1d486cbfddd79df06e253aebe816354880\" returns successfully"
May 27 03:25:05.816111 kubelet[2307]: E0527 03:25:05.816063 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:05.822208 kubelet[2307]: E0527 03:25:05.822169 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:05.823856 kubelet[2307]: E0527 03:25:05.823825 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:06.359166 kubelet[2307]: I0527 03:25:06.359105 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:25:06.532658 kubelet[2307]: E0527 03:25:06.532609 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 27 03:25:06.632374 kubelet[2307]: I0527 03:25:06.632203 2307 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 27 03:25:06.632374 kubelet[2307]: E0527 03:25:06.632271 2307 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 27 03:25:06.658502 kubelet[2307]: E0527 03:25:06.658459 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:06.759637 kubelet[2307]: E0527 03:25:06.759556 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:06.826241 kubelet[2307]: E0527 03:25:06.826111 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:06.826632 kubelet[2307]: E0527 03:25:06.826615 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:06.860576 kubelet[2307]: E0527 03:25:06.860510 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:06.960811 kubelet[2307]: E0527 03:25:06.960679 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.061489 kubelet[2307]: E0527 03:25:07.061412 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.162640 kubelet[2307]: E0527 03:25:07.162563 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.263222 kubelet[2307]: E0527 03:25:07.263170 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.363427 kubelet[2307]: E0527 03:25:07.363349 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.464052 kubelet[2307]: E0527 03:25:07.463978 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.564816 kubelet[2307]: E0527 03:25:07.564638 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.665573 kubelet[2307]: E0527 03:25:07.665481 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.766458 kubelet[2307]: E0527 03:25:07.766396 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.827660 kubelet[2307]: E0527 03:25:07.827528 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 27 03:25:07.866860 kubelet[2307]: E0527 03:25:07.866804 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:07.967508 kubelet[2307]: E0527 03:25:07.967440 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.068300 kubelet[2307]: E0527 03:25:08.068211 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.168931 kubelet[2307]: E0527 03:25:08.168710 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.269636 kubelet[2307]: E0527 03:25:08.269561 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.370816 kubelet[2307]: E0527 03:25:08.370732 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.471767 kubelet[2307]: E0527 03:25:08.471597 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:25:08.510260 kubelet[2307]: I0527 03:25:08.510200 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:25:08.518258 kubelet[2307]: I0527 03:25:08.518226 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:08.522581 kubelet[2307]: I0527 03:25:08.522536 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:25:08.969343 systemd[1]: Reload requested from client PID 2583 ('systemctl') (unit session-7.scope)...
May 27 03:25:08.969368 systemd[1]: Reloading...
May 27 03:25:09.066182 zram_generator::config[2629]: No configuration found.
May 27 03:25:09.298668 kubelet[2307]: I0527 03:25:09.298618 2307 apiserver.go:52] "Watching apiserver"
May 27 03:25:09.304381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:25:09.311100 kubelet[2307]: I0527 03:25:09.311046 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:25:09.456085 systemd[1]: Reloading finished in 486 ms.
May 27 03:25:09.493585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:25:09.513854 systemd[1]: kubelet.service: Deactivated successfully.
May 27 03:25:09.514217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:25:09.514282 systemd[1]: kubelet.service: Consumed 760ms CPU time, 132.6M memory peak.
May 27 03:25:09.516562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:25:09.751846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:25:09.769688 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:25:09.819898 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:25:09.819898 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:25:09.819898 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:25:09.820396 kubelet[2671]: I0527 03:25:09.819834 2671 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:25:09.828390 kubelet[2671]: I0527 03:25:09.828333 2671 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 03:25:09.828390 kubelet[2671]: I0527 03:25:09.828371 2671 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:25:09.828710 kubelet[2671]: I0527 03:25:09.828681 2671 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 03:25:09.830361 kubelet[2671]: I0527 03:25:09.830335 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 03:25:09.833273 kubelet[2671]: I0527 03:25:09.833212 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:25:09.837318 kubelet[2671]: I0527 03:25:09.837288 2671 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:25:09.845336 kubelet[2671]: I0527 03:25:09.845289 2671 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:25:09.845673 kubelet[2671]: I0527 03:25:09.845614 2671 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:25:09.845906 kubelet[2671]: I0527 03:25:09.845664 2671 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:25:09.846003 kubelet[2671]: I0527 03:25:09.845916 2671 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:25:09.846003 kubelet[2671]: I0527 03:25:09.845929 2671 container_manager_linux.go:304] "Creating device plugin manager"
May 27 03:25:09.846003 kubelet[2671]: I0527 03:25:09.845993 2671 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:25:09.846979 kubelet[2671]: I0527 03:25:09.846218 2671 kubelet.go:446] "Attempting to sync node with API server"
May 27 03:25:09.846979 kubelet[2671]: I0527 03:25:09.846266 2671 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:25:09.846979 kubelet[2671]: I0527 03:25:09.846295 2671 kubelet.go:352] "Adding apiserver pod source"
May 27 03:25:09.846979 kubelet[2671]: I0527 03:25:09.846309 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:25:09.847733 kubelet[2671]: I0527 03:25:09.847702 2671 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:25:09.850153 kubelet[2671]: I0527 03:25:09.848601 2671 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:25:09.850153 kubelet[2671]: I0527 03:25:09.849717 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:25:09.850153 kubelet[2671]: I0527 03:25:09.849788 2671 server.go:1287] "Started kubelet"
May 27 03:25:09.850327 kubelet[2671]: I0527 03:25:09.850296 2671 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:25:09.850558 kubelet[2671]: I0527 03:25:09.850479 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:25:09.852389 kubelet[2671]: I0527 03:25:09.851078 2671 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:25:09.852809 kubelet[2671]: I0527 03:25:09.852436 2671 server.go:479] "Adding debug handlers to kubelet server"
May 27 03:25:09.856087 kubelet[2671]: E0527 03:25:09.855861 2671 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:25:09.856087 kubelet[2671]: I0527 03:25:09.856025 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:25:09.856547 kubelet[2671]: I0527 03:25:09.856520 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:25:09.857020 kubelet[2671]: I0527 03:25:09.856780 2671 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:25:09.857657 kubelet[2671]: I0527 03:25:09.857620 2671 factory.go:221] Registration of the systemd container factory successfully
May 27 03:25:09.858856 kubelet[2671]: I0527 03:25:09.857800 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:25:09.861132 kubelet[2671]: I0527 03:25:09.861087 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:25:09.862260 kubelet[2671]: I0527 03:25:09.861909 2671 factory.go:221] Registration of the containerd container factory successfully
May 27 03:25:09.864348 kubelet[2671]: I0527 03:25:09.864302 2671 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:25:09.875353 kubelet[2671]: I0527 03:25:09.875274 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 03:25:09.877507 kubelet[2671]: I0527 03:25:09.877474 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 03:25:09.877507 kubelet[2671]: I0527 03:25:09.877506 2671 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 03:25:09.877590 kubelet[2671]: I0527 03:25:09.877531 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:25:09.877590 kubelet[2671]: I0527 03:25:09.877539 2671 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 03:25:09.877639 kubelet[2671]: E0527 03:25:09.877587 2671 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:25:09.911009 kubelet[2671]: I0527 03:25:09.910944 2671 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:25:09.911009 kubelet[2671]: I0527 03:25:09.910968 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:25:09.911009 kubelet[2671]: I0527 03:25:09.910998 2671 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:25:09.911239 kubelet[2671]: I0527 03:25:09.911220 2671 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:25:09.911298 kubelet[2671]: I0527 03:25:09.911236 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:25:09.911298 kubelet[2671]: I0527 03:25:09.911255 2671 policy_none.go:49] "None policy: Start"
May 27 03:25:09.911298 kubelet[2671]: I0527 03:25:09.911266 2671 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:25:09.911298 kubelet[2671]: I0527 03:25:09.911277 2671 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:25:09.911377 kubelet[2671]: I0527 03:25:09.911370 2671 state_mem.go:75] "Updated machine memory state"
May 27 03:25:09.916041 kubelet[2671]: I0527 03:25:09.916002 2671 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 03:25:09.916325 kubelet[2671]: I0527 03:25:09.916221 2671 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:25:09.916325 kubelet[2671]: I0527 03:25:09.916240 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:25:09.916615 kubelet[2671]: I0527 03:25:09.916572 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:25:09.917623 kubelet[2671]: E0527 03:25:09.917553 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:25:09.979028 kubelet[2671]: I0527 03:25:09.978971 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:25:09.979184 kubelet[2671]: I0527 03:25:09.979104 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:09.979374 kubelet[2671]: I0527 03:25:09.979348 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:25:09.986501 kubelet[2671]: E0527 03:25:09.986234 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 27 03:25:09.986765 kubelet[2671]: E0527 03:25:09.986706 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:09.986904 kubelet[2671]: E0527 03:25:09.986885 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.020330 kubelet[2671]: I0527 03:25:10.020281 2671 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 27 03:25:10.030690 kubelet[2671]: I0527 03:25:10.030635 2671 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 27 03:25:10.030876 kubelet[2671]: I0527 03:25:10.030750 2671 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 27 03:25:10.065675 kubelet[2671]: I0527 03:25:10.065578 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:10.065856 kubelet[2671]: I0527 03:25:10.065654 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:10.065856 kubelet[2671]: I0527 03:25:10.065810 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.065902 kubelet[2671]: I0527 03:25:10.065883 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.065979 kubelet[2671]: I0527 03:25:10.065916 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:10.066012 kubelet[2671]: I0527 03:25:10.065987 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:10.066072 kubelet[2671]: I0527 03:25:10.066048 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 27 03:25:10.066215 kubelet[2671]: I0527 03:25:10.066174 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8fc85e7f9e5e019ab9e4dbfca06d011a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8fc85e7f9e5e019ab9e4dbfca06d011a\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.066215 kubelet[2671]: I0527 03:25:10.066212 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:25:10.847113 kubelet[2671]: I0527 03:25:10.847069 2671 apiserver.go:52] "Watching apiserver"
May 27 03:25:10.861272 kubelet[2671]: I0527 03:25:10.861226 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:25:10.887556 kubelet[2671]: I0527 03:25:10.887316 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:25:10.887556 kubelet[2671]: I0527 03:25:10.887415 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.932305 kubelet[2671]: E0527 03:25:10.932125 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 27 03:25:10.932754 kubelet[2671]: E0527 03:25:10.932430 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 27 03:25:10.974646 kubelet[2671]: I0527 03:25:10.974506 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.974479534 podStartE2EDuration="2.974479534s" podCreationTimestamp="2025-05-27 03:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:25:10.958537096 +0000 UTC m=+1.184052432" watchObservedRunningTime="2025-05-27 03:25:10.974479534 +0000 UTC m=+1.199994870"
May 27 03:25:10.987493 kubelet[2671]: I0527 03:25:10.987392 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.987361983 podStartE2EDuration="2.987361983s" podCreationTimestamp="2025-05-27 03:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:25:10.975563797 +0000 UTC m=+1.201079133" watchObservedRunningTime="2025-05-27 03:25:10.987361983 +0000 UTC m=+1.212877319"
May 27 03:25:10.988720 kubelet[2671]: I0527 03:25:10.988645 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.988635401 podStartE2EDuration="2.988635401s" podCreationTimestamp="2025-05-27 03:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:25:10.988372468 +0000 UTC m=+1.213887804" watchObservedRunningTime="2025-05-27 03:25:10.988635401 +0000 UTC m=+1.214150737"
May 27 03:25:15.575080 kubelet[2671]: I0527 03:25:15.575035 2671 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:25:15.575547 containerd[1580]: time="2025-05-27T03:25:15.575448781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 03:25:15.575807 kubelet[2671]: I0527 03:25:15.575657 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 03:25:16.540902 systemd[1]: Created slice kubepods-besteffort-pod58ec2c13_faaa_4f96_8430_2bb6cec92f39.slice - libcontainer container kubepods-besteffort-pod58ec2c13_faaa_4f96_8430_2bb6cec92f39.slice.
May 27 03:25:16.599688 systemd[1]: Created slice kubepods-besteffort-pod4cdf2d34_39af_4acb_bcfa_79504bf9a2ab.slice - libcontainer container kubepods-besteffort-pod4cdf2d34_39af_4acb_bcfa_79504bf9a2ab.slice.
May 27 03:25:16.602760 kubelet[2671]: I0527 03:25:16.602703 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsxk\" (UniqueName: \"kubernetes.io/projected/58ec2c13-faaa-4f96-8430-2bb6cec92f39-kube-api-access-ppsxk\") pod \"kube-proxy-5pmvk\" (UID: \"58ec2c13-faaa-4f96-8430-2bb6cec92f39\") " pod="kube-system/kube-proxy-5pmvk" May 27 03:25:16.602760 kubelet[2671]: I0527 03:25:16.602743 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-var-lib-calico\") pod \"tigera-operator-844669ff44-fr86j\" (UID: \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\") " pod="tigera-operator/tigera-operator-844669ff44-fr86j" May 27 03:25:16.604240 kubelet[2671]: I0527 03:25:16.602762 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlsvq\" (UniqueName: \"kubernetes.io/projected/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-kube-api-access-rlsvq\") pod \"tigera-operator-844669ff44-fr86j\" (UID: \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\") " pod="tigera-operator/tigera-operator-844669ff44-fr86j" May 27 03:25:16.604311 kubelet[2671]: I0527 03:25:16.604280 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58ec2c13-faaa-4f96-8430-2bb6cec92f39-lib-modules\") pod \"kube-proxy-5pmvk\" (UID: \"58ec2c13-faaa-4f96-8430-2bb6cec92f39\") " pod="kube-system/kube-proxy-5pmvk" May 27 03:25:16.604311 kubelet[2671]: I0527 03:25:16.604305 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58ec2c13-faaa-4f96-8430-2bb6cec92f39-kube-proxy\") pod \"kube-proxy-5pmvk\" (UID: \"58ec2c13-faaa-4f96-8430-2bb6cec92f39\") " 
pod="kube-system/kube-proxy-5pmvk" May 27 03:25:16.604372 kubelet[2671]: I0527 03:25:16.604319 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58ec2c13-faaa-4f96-8430-2bb6cec92f39-xtables-lock\") pod \"kube-proxy-5pmvk\" (UID: \"58ec2c13-faaa-4f96-8430-2bb6cec92f39\") " pod="kube-system/kube-proxy-5pmvk" May 27 03:25:16.851408 containerd[1580]: time="2025-05-27T03:25:16.851216259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pmvk,Uid:58ec2c13-faaa-4f96-8430-2bb6cec92f39,Namespace:kube-system,Attempt:0,}" May 27 03:25:16.876303 containerd[1580]: time="2025-05-27T03:25:16.876244885Z" level=info msg="connecting to shim 0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d" address="unix:///run/containerd/s/0ce9b4b63f583d6a3229643346b0e010778853226958e4ac51b6683f48bfda83" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:16.904826 containerd[1580]: time="2025-05-27T03:25:16.904522628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-fr86j,Uid:4cdf2d34-39af-4acb-bcfa-79504bf9a2ab,Namespace:tigera-operator,Attempt:0,}" May 27 03:25:16.916369 systemd[1]: Started cri-containerd-0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d.scope - libcontainer container 0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d. May 27 03:25:16.932590 containerd[1580]: time="2025-05-27T03:25:16.932470537Z" level=info msg="connecting to shim 25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e" address="unix:///run/containerd/s/444a5f5db615c0a7a8af501eb8d3a72b94a6d5f2124558d43a4b6e33311b58ed" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:16.964672 systemd[1]: Started cri-containerd-25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e.scope - libcontainer container 25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e. 
May 27 03:25:16.966805 containerd[1580]: time="2025-05-27T03:25:16.966725879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pmvk,Uid:58ec2c13-faaa-4f96-8430-2bb6cec92f39,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d\"" May 27 03:25:16.971779 containerd[1580]: time="2025-05-27T03:25:16.971735792Z" level=info msg="CreateContainer within sandbox \"0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:25:16.986933 containerd[1580]: time="2025-05-27T03:25:16.986884952Z" level=info msg="Container 5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d: CDI devices from CRI Config.CDIDevices: []" May 27 03:25:16.996225 containerd[1580]: time="2025-05-27T03:25:16.996180058Z" level=info msg="CreateContainer within sandbox \"0a5103f4eecea366dcd36849da1a7a4443acb1f74cf09ca35cdd2c78d2ef8a2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d\"" May 27 03:25:16.997346 containerd[1580]: time="2025-05-27T03:25:16.997319725Z" level=info msg="StartContainer for \"5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d\"" May 27 03:25:16.998937 containerd[1580]: time="2025-05-27T03:25:16.998915307Z" level=info msg="connecting to shim 5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d" address="unix:///run/containerd/s/0ce9b4b63f583d6a3229643346b0e010778853226958e4ac51b6683f48bfda83" protocol=ttrpc version=3 May 27 03:25:17.019471 containerd[1580]: time="2025-05-27T03:25:17.019409183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-fr86j,Uid:4cdf2d34-39af-4acb-bcfa-79504bf9a2ab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\"" May 27 03:25:17.025251 containerd[1580]: 
time="2025-05-27T03:25:17.022950678Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:25:17.028353 systemd[1]: Started cri-containerd-5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d.scope - libcontainer container 5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d. May 27 03:25:17.075040 containerd[1580]: time="2025-05-27T03:25:17.074996386Z" level=info msg="StartContainer for \"5899fdced0701ab5b9946a149c50eaf273fb3105b1638980739d9a17fcc32f0d\" returns successfully" May 27 03:25:19.162410 kubelet[2671]: I0527 03:25:19.162321 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5pmvk" podStartSLOduration=3.16230111 podStartE2EDuration="3.16230111s" podCreationTimestamp="2025-05-27 03:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:25:17.95232808 +0000 UTC m=+8.177843416" watchObservedRunningTime="2025-05-27 03:25:19.16230111 +0000 UTC m=+9.387816446" May 27 03:25:20.168489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569258654.mount: Deactivated successfully. 
May 27 03:25:20.523313 containerd[1580]: time="2025-05-27T03:25:20.523247956Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:20.524014 containerd[1580]: time="2025-05-27T03:25:20.523988821Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:25:20.525444 containerd[1580]: time="2025-05-27T03:25:20.525396858Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:20.529159 containerd[1580]: time="2025-05-27T03:25:20.528045906Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:20.529563 containerd[1580]: time="2025-05-27T03:25:20.529510151Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.504249194s" May 27 03:25:20.529687 containerd[1580]: time="2025-05-27T03:25:20.529664395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:25:20.532952 containerd[1580]: time="2025-05-27T03:25:20.532922836Z" level=info msg="CreateContainer within sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:25:20.542610 containerd[1580]: time="2025-05-27T03:25:20.542559151Z" level=info msg="Container 
e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77: CDI devices from CRI Config.CDIDevices: []" May 27 03:25:20.551309 containerd[1580]: time="2025-05-27T03:25:20.551251815Z" level=info msg="CreateContainer within sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\"" May 27 03:25:20.551885 containerd[1580]: time="2025-05-27T03:25:20.551835660Z" level=info msg="StartContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\"" May 27 03:25:20.552908 containerd[1580]: time="2025-05-27T03:25:20.552878891Z" level=info msg="connecting to shim e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77" address="unix:///run/containerd/s/444a5f5db615c0a7a8af501eb8d3a72b94a6d5f2124558d43a4b6e33311b58ed" protocol=ttrpc version=3 May 27 03:25:20.610264 systemd[1]: Started cri-containerd-e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77.scope - libcontainer container e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77. 
May 27 03:25:20.644207 containerd[1580]: time="2025-05-27T03:25:20.644157460Z" level=info msg="StartContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" returns successfully" May 27 03:25:20.922932 kubelet[2671]: I0527 03:25:20.922763 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-fr86j" podStartSLOduration=1.412986291 podStartE2EDuration="4.922743096s" podCreationTimestamp="2025-05-27 03:25:16 +0000 UTC" firstStartedPulling="2025-05-27 03:25:17.021722815 +0000 UTC m=+7.247238161" lastFinishedPulling="2025-05-27 03:25:20.53147962 +0000 UTC m=+10.756994966" observedRunningTime="2025-05-27 03:25:20.922591176 +0000 UTC m=+11.148106512" watchObservedRunningTime="2025-05-27 03:25:20.922743096 +0000 UTC m=+11.148258432" May 27 03:25:26.459860 sudo[1758]: pam_unix(sudo:session): session closed for user root May 27 03:25:26.462936 sshd[1757]: Connection closed by 10.0.0.1 port 60556 May 27 03:25:26.464538 sshd-session[1755]: pam_unix(sshd:session): session closed for user core May 27 03:25:26.471087 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. May 27 03:25:26.472540 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:60556.service: Deactivated successfully. May 27 03:25:26.477083 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:25:26.477886 systemd[1]: session-7.scope: Consumed 4.434s CPU time, 224.8M memory peak. May 27 03:25:26.483791 systemd-logind[1505]: Removed session 7. May 27 03:25:27.257152 update_engine[1508]: I20250527 03:25:27.256201 1508 update_attempter.cc:509] Updating boot flags... May 27 03:25:29.370798 systemd[1]: Created slice kubepods-besteffort-podd5855a5c_41c3_4994_a5e6_2f21333fef2e.slice - libcontainer container kubepods-besteffort-podd5855a5c_41c3_4994_a5e6_2f21333fef2e.slice. 
May 27 03:25:29.385556 kubelet[2671]: I0527 03:25:29.385523 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ll8\" (UniqueName: \"kubernetes.io/projected/d5855a5c-41c3-4994-a5e6-2f21333fef2e-kube-api-access-44ll8\") pod \"calico-typha-6d64b75d5-w8w2n\" (UID: \"d5855a5c-41c3-4994-a5e6-2f21333fef2e\") " pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:25:29.385556 kubelet[2671]: I0527 03:25:29.385559 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5855a5c-41c3-4994-a5e6-2f21333fef2e-tigera-ca-bundle\") pod \"calico-typha-6d64b75d5-w8w2n\" (UID: \"d5855a5c-41c3-4994-a5e6-2f21333fef2e\") " pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:25:29.385938 kubelet[2671]: I0527 03:25:29.385573 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5855a5c-41c3-4994-a5e6-2f21333fef2e-typha-certs\") pod \"calico-typha-6d64b75d5-w8w2n\" (UID: \"d5855a5c-41c3-4994-a5e6-2f21333fef2e\") " pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:25:29.589902 systemd[1]: Created slice kubepods-besteffort-pod3ba286e9_822e_413a_a6bf_426b06794d9c.slice - libcontainer container kubepods-besteffort-pod3ba286e9_822e_413a_a6bf_426b06794d9c.slice. 
May 27 03:25:29.677535 containerd[1580]: time="2025-05-27T03:25:29.677375163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d64b75d5-w8w2n,Uid:d5855a5c-41c3-4994-a5e6-2f21333fef2e,Namespace:calico-system,Attempt:0,}" May 27 03:25:29.686751 kubelet[2671]: I0527 03:25:29.686688 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-cni-bin-dir\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.686751 kubelet[2671]: I0527 03:25:29.686725 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-xtables-lock\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.686751 kubelet[2671]: I0527 03:25:29.686740 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3ba286e9-822e-413a-a6bf-426b06794d9c-node-certs\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.686751 kubelet[2671]: I0527 03:25:29.686755 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-policysync\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.686751 kubelet[2671]: I0527 03:25:29.686773 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-lib-modules\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687056 kubelet[2671]: I0527 03:25:29.686787 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-cni-log-dir\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687056 kubelet[2671]: I0527 03:25:29.686852 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-flexvol-driver-host\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687056 kubelet[2671]: I0527 03:25:29.686894 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vj5\" (UniqueName: \"kubernetes.io/projected/3ba286e9-822e-413a-a6bf-426b06794d9c-kube-api-access-g4vj5\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687056 kubelet[2671]: I0527 03:25:29.686932 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-cni-net-dir\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687056 kubelet[2671]: I0527 03:25:29.686952 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-var-lib-calico\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687203 kubelet[2671]: I0527 03:25:29.686984 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba286e9-822e-413a-a6bf-426b06794d9c-tigera-ca-bundle\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.687203 kubelet[2671]: I0527 03:25:29.687003 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3ba286e9-822e-413a-a6bf-426b06794d9c-var-run-calico\") pod \"calico-node-nl4v8\" (UID: \"3ba286e9-822e-413a-a6bf-426b06794d9c\") " pod="calico-system/calico-node-nl4v8" May 27 03:25:29.790169 kubelet[2671]: E0527 03:25:29.789982 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.790169 kubelet[2671]: W0527 03:25:29.790014 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.790169 kubelet[2671]: E0527 03:25:29.790067 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.791897 kubelet[2671]: E0527 03:25:29.791873 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.791897 kubelet[2671]: W0527 03:25:29.791893 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.791981 kubelet[2671]: E0527 03:25:29.791911 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.860411 kubelet[2671]: E0527 03:25:29.860296 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.860411 kubelet[2671]: W0527 03:25:29.860344 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.860411 kubelet[2671]: E0527 03:25:29.860371 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.893484 containerd[1580]: time="2025-05-27T03:25:29.893396154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nl4v8,Uid:3ba286e9-822e-413a-a6bf-426b06794d9c,Namespace:calico-system,Attempt:0,}" May 27 03:25:29.894624 kubelet[2671]: E0527 03:25:29.894560 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:29.916605 containerd[1580]: time="2025-05-27T03:25:29.916535524Z" level=info msg="connecting to shim bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71" address="unix:///run/containerd/s/42c3189df2d357a3d23be29abe87b7e5f963a4dca567e183475fcc598188c51e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:29.926037 containerd[1580]: time="2025-05-27T03:25:29.925707858Z" level=info msg="connecting to shim 86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef" address="unix:///run/containerd/s/d17162251f4da78e158b0b5f99583417b0102ad3b274c39bf645e4b1e97f4c43" namespace=k8s.io protocol=ttrpc version=3 May 27 03:25:29.949393 systemd[1]: Started cri-containerd-bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71.scope - libcontainer container bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71. May 27 03:25:29.953247 systemd[1]: Started cri-containerd-86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef.scope - libcontainer container 86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef. 
May 27 03:25:29.972158 kubelet[2671]: E0527 03:25:29.972089 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.972158 kubelet[2671]: W0527 03:25:29.972119 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.972400 kubelet[2671]: E0527 03:25:29.972347 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.972902 kubelet[2671]: E0527 03:25:29.972863 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.972902 kubelet[2671]: W0527 03:25:29.972898 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.972976 kubelet[2671]: E0527 03:25:29.972927 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.974174 kubelet[2671]: E0527 03:25:29.973364 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.974174 kubelet[2671]: W0527 03:25:29.973379 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.974174 kubelet[2671]: E0527 03:25:29.973388 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.975282 kubelet[2671]: E0527 03:25:29.975246 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.975282 kubelet[2671]: W0527 03:25:29.975278 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.975664 kubelet[2671]: E0527 03:25:29.975307 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.975664 kubelet[2671]: E0527 03:25:29.975648 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.975664 kubelet[2671]: W0527 03:25:29.975657 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.975889 kubelet[2671]: E0527 03:25:29.975677 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.975942 kubelet[2671]: E0527 03:25:29.975917 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.975942 kubelet[2671]: W0527 03:25:29.975926 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.975942 kubelet[2671]: E0527 03:25:29.975936 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.976377 kubelet[2671]: E0527 03:25:29.976272 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.976377 kubelet[2671]: W0527 03:25:29.976300 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.976377 kubelet[2671]: E0527 03:25:29.976317 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.976751 kubelet[2671]: E0527 03:25:29.976736 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.976840 kubelet[2671]: W0527 03:25:29.976826 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.976993 kubelet[2671]: E0527 03:25:29.976912 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.977300 kubelet[2671]: E0527 03:25:29.977287 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.977390 kubelet[2671]: W0527 03:25:29.977378 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.977474 kubelet[2671]: E0527 03:25:29.977463 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.977736 kubelet[2671]: E0527 03:25:29.977715 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.977857 kubelet[2671]: W0527 03:25:29.977785 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.977857 kubelet[2671]: E0527 03:25:29.977810 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:25:29.978197 kubelet[2671]: E0527 03:25:29.978182 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.978361 kubelet[2671]: W0527 03:25:29.978281 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.978361 kubelet[2671]: E0527 03:25:29.978299 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:25:29.979161 kubelet[2671]: E0527 03:25:29.979073 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:25:29.979161 kubelet[2671]: W0527 03:25:29.979089 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:25:29.979356 kubelet[2671]: E0527 03:25:29.979302 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 03:25:29.989840 kubelet[2671]: I0527 03:25:29.989707 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b054e321-f80c-45e5-a80b-17a7bbc92d8f-kubelet-dir\") pod \"csi-node-driver-lktnw\" (UID: \"b054e321-f80c-45e5-a80b-17a7bbc92d8f\") " pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:29.990493 kubelet[2671]: I0527 03:25:29.990349 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b054e321-f80c-45e5-a80b-17a7bbc92d8f-varrun\") pod \"csi-node-driver-lktnw\" (UID: \"b054e321-f80c-45e5-a80b-17a7bbc92d8f\") " pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:29.991887 kubelet[2671]: I0527 03:25:29.991860 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b054e321-f80c-45e5-a80b-17a7bbc92d8f-socket-dir\") pod \"csi-node-driver-lktnw\" (UID: \"b054e321-f80c-45e5-a80b-17a7bbc92d8f\") " pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:29.994294 kubelet[2671]: I0527 03:25:29.994276 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b054e321-f80c-45e5-a80b-17a7bbc92d8f-registration-dir\") pod \"csi-node-driver-lktnw\" (UID: \"b054e321-f80c-45e5-a80b-17a7bbc92d8f\") " pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:29.997806 kubelet[2671]: I0527 03:25:29.997746 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9wmj\" (UniqueName: \"kubernetes.io/projected/b054e321-f80c-45e5-a80b-17a7bbc92d8f-kube-api-access-v9wmj\") pod \"csi-node-driver-lktnw\" (UID: \"b054e321-f80c-45e5-a80b-17a7bbc92d8f\") " pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:29.998582 containerd[1580]: time="2025-05-27T03:25:29.998545986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nl4v8,Uid:3ba286e9-822e-413a-a6bf-426b06794d9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef\"" May 27 03:25:30.001541 containerd[1580]: time="2025-05-27T03:25:30.001511862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:25:30.027089 containerd[1580]: time="2025-05-27T03:25:30.026933364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d64b75d5-w8w2n,Uid:d5855a5c-41c3-4994-a5e6-2f21333fef2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71\""
May 27 03:25:31.363529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572136632.mount: Deactivated successfully. May 27 03:25:31.492709 containerd[1580]: time="2025-05-27T03:25:31.492644084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:31.495085 containerd[1580]: time="2025-05-27T03:25:31.495026480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 03:25:31.497055 containerd[1580]: time="2025-05-27T03:25:31.497009071Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:31.499209 containerd[1580]: time="2025-05-27T03:25:31.499077214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:31.499755 containerd[1580]: time="2025-05-27T03:25:31.499721513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with 
image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.497946935s" May 27 03:25:31.499809 containerd[1580]: time="2025-05-27T03:25:31.499755949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:25:31.500833 containerd[1580]: time="2025-05-27T03:25:31.500793361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:25:31.502919 containerd[1580]: time="2025-05-27T03:25:31.502852246Z" level=info msg="CreateContainer within sandbox \"86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:25:31.521870 containerd[1580]: time="2025-05-27T03:25:31.521655820Z" level=info msg="Container ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74: CDI devices from CRI Config.CDIDevices: []" May 27 03:25:31.539919 containerd[1580]: time="2025-05-27T03:25:31.539865782Z" level=info msg="CreateContainer within sandbox \"86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\"" May 27 03:25:31.540521 containerd[1580]: time="2025-05-27T03:25:31.540472950Z" level=info msg="StartContainer for \"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\"" May 27 03:25:31.541985 containerd[1580]: time="2025-05-27T03:25:31.541955746Z" level=info msg="connecting to shim ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74" 
address="unix:///run/containerd/s/d17162251f4da78e158b0b5f99583417b0102ad3b274c39bf645e4b1e97f4c43" protocol=ttrpc version=3 May 27 03:25:31.567379 systemd[1]: Started cri-containerd-ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74.scope - libcontainer container ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74. May 27 03:25:31.629269 systemd[1]: cri-containerd-ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74.scope: Deactivated successfully. May 27 03:25:31.634193 containerd[1580]: time="2025-05-27T03:25:31.634124829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\" id:\"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\" pid:3290 exited_at:{seconds:1748316331 nanos:633506910}" May 27 03:25:31.839212 containerd[1580]: time="2025-05-27T03:25:31.839112734Z" level=info msg="received exit event container_id:\"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\" id:\"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\" pid:3290 exited_at:{seconds:1748316331 nanos:633506910}" May 27 03:25:31.840957 containerd[1580]: time="2025-05-27T03:25:31.840881069Z" level=info msg="StartContainer for \"ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74\" returns successfully" May 27 03:25:31.862203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae1af94f00a5214b95da819b72ab7d678a6a9bafeeefe96d96d154b73c889d74-rootfs.mount: Deactivated successfully. 
May 27 03:25:31.925078 kubelet[2671]: E0527 03:25:31.880703 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:33.878472 kubelet[2671]: E0527 03:25:33.878396 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:35.878885 kubelet[2671]: E0527 03:25:35.878800 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:36.751017 containerd[1580]: time="2025-05-27T03:25:36.750954016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:36.786280 containerd[1580]: time="2025-05-27T03:25:36.786226999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828" May 27 03:25:36.868569 containerd[1580]: time="2025-05-27T03:25:36.868493412Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:36.929335 containerd[1580]: time="2025-05-27T03:25:36.929271199Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:36.929837 containerd[1580]: time="2025-05-27T03:25:36.929788577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 5.428958576s" May 27 03:25:36.929837 containerd[1580]: time="2025-05-27T03:25:36.929835234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:25:36.930891 containerd[1580]: time="2025-05-27T03:25:36.930871851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:25:36.939747 containerd[1580]: time="2025-05-27T03:25:36.939589724Z" level=info msg="CreateContainer within sandbox \"bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:25:37.236582 containerd[1580]: time="2025-05-27T03:25:37.236525473Z" level=info msg="Container 4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4: CDI devices from CRI Config.CDIDevices: []" May 27 03:25:37.700270 containerd[1580]: time="2025-05-27T03:25:37.700211746Z" level=info msg="CreateContainer within sandbox \"bd29b6af627f403ab06c22019eed56db6e905aba341cf5d737b5850df0de9b71\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4\"" May 27 03:25:37.700746 containerd[1580]: time="2025-05-27T03:25:37.700720726Z" level=info msg="StartContainer for 
\"4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4\"" May 27 03:25:37.701794 containerd[1580]: time="2025-05-27T03:25:37.701767851Z" level=info msg="connecting to shim 4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4" address="unix:///run/containerd/s/42c3189df2d357a3d23be29abe87b7e5f963a4dca567e183475fcc598188c51e" protocol=ttrpc version=3 May 27 03:25:37.727273 systemd[1]: Started cri-containerd-4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4.scope - libcontainer container 4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4. May 27 03:25:37.877958 kubelet[2671]: E0527 03:25:37.877886 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:37.919596 containerd[1580]: time="2025-05-27T03:25:37.919542051Z" level=info msg="StartContainer for \"4fc38f737a61c813dd4a343bb738c37986275a3fb256db1e62fd827fdae426f4\" returns successfully" May 27 03:25:38.190686 kubelet[2671]: I0527 03:25:38.190526 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d64b75d5-w8w2n" podStartSLOduration=2.288119825 podStartE2EDuration="9.190506244s" podCreationTimestamp="2025-05-27 03:25:29 +0000 UTC" firstStartedPulling="2025-05-27 03:25:30.028276858 +0000 UTC m=+20.253792194" lastFinishedPulling="2025-05-27 03:25:36.930663277 +0000 UTC m=+27.156178613" observedRunningTime="2025-05-27 03:25:38.190389043 +0000 UTC m=+28.415904369" watchObservedRunningTime="2025-05-27 03:25:38.190506244 +0000 UTC m=+28.416021570" May 27 03:25:38.945934 kubelet[2671]: I0527 03:25:38.945893 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:25:39.881377 kubelet[2671]: E0527 03:25:39.881295 
2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:41.880680 kubelet[2671]: E0527 03:25:41.880616 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:43.879868 kubelet[2671]: E0527 03:25:43.879814 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:44.457067 containerd[1580]: time="2025-05-27T03:25:44.456997016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:44.484988 containerd[1580]: time="2025-05-27T03:25:44.484872442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:25:44.508643 containerd[1580]: time="2025-05-27T03:25:44.508554954Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:44.540915 containerd[1580]: time="2025-05-27T03:25:44.540824271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:25:44.541671 containerd[1580]: time="2025-05-27T03:25:44.541616703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 7.610713423s" May 27 03:25:44.541736 containerd[1580]: time="2025-05-27T03:25:44.541670734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:25:44.543778 containerd[1580]: time="2025-05-27T03:25:44.543744408Z" level=info msg="CreateContainer within sandbox \"86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:25:44.637380 containerd[1580]: time="2025-05-27T03:25:44.637324182Z" level=info msg="Container 5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267: CDI devices from CRI Config.CDIDevices: []" May 27 03:25:44.687312 containerd[1580]: time="2025-05-27T03:25:44.687250679Z" level=info msg="CreateContainer within sandbox \"86b899ae1781c7479bad293950a28ce2383db90b8fab545394351cd7099347ef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\"" May 27 03:25:44.687966 containerd[1580]: time="2025-05-27T03:25:44.687897457Z" level=info msg="StartContainer for \"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\"" May 27 03:25:44.690086 containerd[1580]: time="2025-05-27T03:25:44.690051402Z" level=info msg="connecting to shim 5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267" 
address="unix:///run/containerd/s/d17162251f4da78e158b0b5f99583417b0102ad3b274c39bf645e4b1e97f4c43" protocol=ttrpc version=3 May 27 03:25:44.712294 systemd[1]: Started cri-containerd-5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267.scope - libcontainer container 5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267. May 27 03:25:44.813701 containerd[1580]: time="2025-05-27T03:25:44.813637332Z" level=info msg="StartContainer for \"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\" returns successfully" May 27 03:25:45.878271 kubelet[2671]: E0527 03:25:45.878160 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:46.131814 systemd[1]: cri-containerd-5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267.scope: Deactivated successfully. May 27 03:25:46.132658 systemd[1]: cri-containerd-5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267.scope: Consumed 608ms CPU time, 183M memory peak, 3.5M read from disk, 170.9M written to disk. 
May 27 03:25:46.134833 kubelet[2671]: I0527 03:25:46.134796 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:25:46.135034 containerd[1580]: time="2025-05-27T03:25:46.134984910Z" level=info msg="received exit event container_id:\"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\" id:\"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\" pid:3393 exited_at:{seconds:1748316346 nanos:134663406}" May 27 03:25:46.135458 containerd[1580]: time="2025-05-27T03:25:46.135006651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\" id:\"5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267\" pid:3393 exited_at:{seconds:1748316346 nanos:134663406}" May 27 03:25:46.174658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dbf74aee5d4f00c55a0d07c18b4d5fb51bbe6b016e9f295f277e7bbca381267-rootfs.mount: Deactivated successfully. May 27 03:25:46.182116 systemd[1]: Created slice kubepods-besteffort-pod57d6fdf8_dafc_4012_a8a7_1301381db58e.slice - libcontainer container kubepods-besteffort-pod57d6fdf8_dafc_4012_a8a7_1301381db58e.slice. May 27 03:25:46.192892 systemd[1]: Created slice kubepods-burstable-podba2701a4_383c_4885_b697_c2657b09fefa.slice - libcontainer container kubepods-burstable-podba2701a4_383c_4885_b697_c2657b09fefa.slice. May 27 03:25:46.202904 systemd[1]: Created slice kubepods-burstable-pod0025fdff_1c55_4c53_8432_c3b22baafc85.slice - libcontainer container kubepods-burstable-pod0025fdff_1c55_4c53_8432_c3b22baafc85.slice. 
May 27 03:25:46.213562 kubelet[2671]: I0527 03:25:46.213308 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrlm\" (UniqueName: \"kubernetes.io/projected/983a0221-2ac7-4637-a33d-b7cc65ccc040-kube-api-access-hkrlm\") pod \"calico-apiserver-76ccbfb48-bpwgs\" (UID: \"983a0221-2ac7-4637-a33d-b7cc65ccc040\") " pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs" May 27 03:25:46.213562 kubelet[2671]: I0527 03:25:46.213341 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-w2jhg\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") " pod="calico-system/goldmane-78d55f7ddc-w2jhg" May 27 03:25:46.213562 kubelet[2671]: I0527 03:25:46.213360 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9sp\" (UniqueName: \"kubernetes.io/projected/ba2701a4-383c-4885-b697-c2657b09fefa-kube-api-access-6h9sp\") pod \"coredns-668d6bf9bc-lz678\" (UID: \"ba2701a4-383c-4885-b697-c2657b09fefa\") " pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:25:46.213562 kubelet[2671]: I0527 03:25:46.213384 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8k8\" (UniqueName: \"kubernetes.io/projected/eec93da8-25f2-4392-a00f-ed24f87d6be8-kube-api-access-hv8k8\") pod \"goldmane-78d55f7ddc-w2jhg\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") " pod="calico-system/goldmane-78d55f7ddc-w2jhg" May 27 03:25:46.213562 kubelet[2671]: I0527 03:25:46.213402 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvdl\" (UniqueName: \"kubernetes.io/projected/0025fdff-1c55-4c53-8432-c3b22baafc85-kube-api-access-5fvdl\") pod \"coredns-668d6bf9bc-qpxp6\" 
(UID: \"0025fdff-1c55-4c53-8432-c3b22baafc85\") " pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:25:46.213850 kubelet[2671]: I0527 03:25:46.213421 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqgp\" (UniqueName: \"kubernetes.io/projected/57d6fdf8-dafc-4012-a8a7-1301381db58e-kube-api-access-tpqgp\") pod \"calico-kube-controllers-79469b85c4-szmp2\" (UID: \"57d6fdf8-dafc-4012-a8a7-1301381db58e\") " pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:25:46.213850 kubelet[2671]: I0527 03:25:46.213439 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba2701a4-383c-4885-b697-c2657b09fefa-config-volume\") pod \"coredns-668d6bf9bc-lz678\" (UID: \"ba2701a4-383c-4885-b697-c2657b09fefa\") " pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:25:46.213850 kubelet[2671]: I0527 03:25:46.213456 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-calico-apiserver-certs\") pod \"calico-apiserver-76ccbfb48-d4dj9\" (UID: \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\") " pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9" May 27 03:25:46.213850 kubelet[2671]: I0527 03:25:46.213470 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/983a0221-2ac7-4637-a33d-b7cc65ccc040-calico-apiserver-certs\") pod \"calico-apiserver-76ccbfb48-bpwgs\" (UID: \"983a0221-2ac7-4637-a33d-b7cc65ccc040\") " pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs" May 27 03:25:46.213850 kubelet[2671]: I0527 03:25:46.213483 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-config\") pod \"goldmane-78d55f7ddc-w2jhg\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") " pod="calico-system/goldmane-78d55f7ddc-w2jhg" May 27 03:25:46.214006 kubelet[2671]: I0527 03:25:46.213514 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-backend-key-pair\") pod \"whisker-ff97fb58b-ldlhh\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " pod="calico-system/whisker-ff97fb58b-ldlhh" May 27 03:25:46.214006 kubelet[2671]: I0527 03:25:46.213531 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvsf\" (UniqueName: \"kubernetes.io/projected/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-kube-api-access-9nvsf\") pod \"calico-apiserver-76ccbfb48-d4dj9\" (UID: \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\") " pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9" May 27 03:25:46.214006 kubelet[2671]: I0527 03:25:46.213592 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-w2jhg\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") " pod="calico-system/goldmane-78d55f7ddc-w2jhg" May 27 03:25:46.214006 kubelet[2671]: I0527 03:25:46.213628 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0025fdff-1c55-4c53-8432-c3b22baafc85-config-volume\") pod \"coredns-668d6bf9bc-qpxp6\" (UID: \"0025fdff-1c55-4c53-8432-c3b22baafc85\") " pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:25:46.214006 kubelet[2671]: I0527 03:25:46.213652 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-ca-bundle\") pod \"whisker-ff97fb58b-ldlhh\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " pod="calico-system/whisker-ff97fb58b-ldlhh" May 27 03:25:46.214198 kubelet[2671]: I0527 03:25:46.214155 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr6d4\" (UniqueName: \"kubernetes.io/projected/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-kube-api-access-xr6d4\") pod \"whisker-ff97fb58b-ldlhh\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " pod="calico-system/whisker-ff97fb58b-ldlhh" May 27 03:25:46.214198 kubelet[2671]: I0527 03:25:46.214192 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d6fdf8-dafc-4012-a8a7-1301381db58e-tigera-ca-bundle\") pod \"calico-kube-controllers-79469b85c4-szmp2\" (UID: \"57d6fdf8-dafc-4012-a8a7-1301381db58e\") " pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:25:46.216311 systemd[1]: Created slice kubepods-besteffort-pod983a0221_2ac7_4637_a33d_b7cc65ccc040.slice - libcontainer container kubepods-besteffort-pod983a0221_2ac7_4637_a33d_b7cc65ccc040.slice. May 27 03:25:46.222275 systemd[1]: Created slice kubepods-besteffort-pod505c9554_5eb5_4a9a_bd7b_577a3564eb3e.slice - libcontainer container kubepods-besteffort-pod505c9554_5eb5_4a9a_bd7b_577a3564eb3e.slice. May 27 03:25:46.227627 systemd[1]: Created slice kubepods-besteffort-podeec93da8_25f2_4392_a00f_ed24f87d6be8.slice - libcontainer container kubepods-besteffort-podeec93da8_25f2_4392_a00f_ed24f87d6be8.slice. May 27 03:25:46.232174 systemd[1]: Created slice kubepods-besteffort-pod6a7e716e_6209_4c4d_b385_cdc9a5ecc1e6.slice - libcontainer container kubepods-besteffort-pod6a7e716e_6209_4c4d_b385_cdc9a5ecc1e6.slice. 
May 27 03:25:46.509404 containerd[1580]: time="2025-05-27T03:25:46.509356241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}" May 27 03:25:46.512373 containerd[1580]: time="2025-05-27T03:25:46.512335968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}" May 27 03:25:46.512531 containerd[1580]: time="2025-05-27T03:25:46.512497783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,}" May 27 03:25:46.523663 containerd[1580]: time="2025-05-27T03:25:46.523535361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76ccbfb48-bpwgs,Uid:983a0221-2ac7-4637-a33d-b7cc65ccc040,Namespace:calico-apiserver,Attempt:0,}" May 27 03:25:46.526755 containerd[1580]: time="2025-05-27T03:25:46.525950094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76ccbfb48-d4dj9,Uid:505c9554-5eb5-4a9a-bd7b-577a3564eb3e,Namespace:calico-apiserver,Attempt:0,}" May 27 03:25:46.531306 containerd[1580]: time="2025-05-27T03:25:46.531197358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w2jhg,Uid:eec93da8-25f2-4392-a00f-ed24f87d6be8,Namespace:calico-system,Attempt:0,}" May 27 03:25:46.535046 containerd[1580]: time="2025-05-27T03:25:46.535013108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff97fb58b-ldlhh,Uid:6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6,Namespace:calico-system,Attempt:0,}" May 27 03:25:46.642164 containerd[1580]: time="2025-05-27T03:25:46.641752302Z" level=error msg="Failed to destroy network for sandbox \"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:46.644702 containerd[1580]: time="2025-05-27T03:25:46.644655255Z" level=error msg="Failed to destroy network for sandbox \"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:46.646533 containerd[1580]: time="2025-05-27T03:25:46.646477744Z" level=error msg="Failed to destroy network for sandbox \"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:46.647675 containerd[1580]: time="2025-05-27T03:25:46.647597722Z" level=error msg="Failed to destroy network for sandbox \"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:46.649089 containerd[1580]: time="2025-05-27T03:25:46.649037290Z" level=error msg="Failed to destroy network for sandbox \"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:46.650420 containerd[1580]: time="2025-05-27T03:25:46.650394784Z" level=error msg="Failed to destroy network for sandbox \"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.664664 containerd[1580]: time="2025-05-27T03:25:46.664626373Z" level=error msg="Failed to destroy network for sandbox \"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.676849 containerd[1580]: time="2025-05-27T03:25:46.676774441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-w2jhg,Uid:eec93da8-25f2-4392-a00f-ed24f87d6be8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.676849 containerd[1580]: time="2025-05-27T03:25:46.676832370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.677127 containerd[1580]: time="2025-05-27T03:25:46.676835105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.677127 containerd[1580]: time="2025-05-27T03:25:46.676823543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.677127 containerd[1580]: time="2025-05-27T03:25:46.676812052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76ccbfb48-bpwgs,Uid:983a0221-2ac7-4637-a33d-b7cc65ccc040,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.677838 containerd[1580]: time="2025-05-27T03:25:46.677805702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff97fb58b-ldlhh,Uid:6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.678744 containerd[1580]: time="2025-05-27T03:25:46.678706086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76ccbfb48-d4dj9,Uid:505c9554-5eb5-4a9a-bd7b-577a3564eb3e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689247 kubelet[2671]: E0527 03:25:46.688856 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689247 kubelet[2671]: E0527 03:25:46.688856 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689247 kubelet[2671]: E0527 03:25:46.688943 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689247 kubelet[2671]: E0527 03:25:46.688985 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-w2jhg"
May 27 03:25:46.689417 kubelet[2671]: E0527 03:25:46.688997 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff97fb58b-ldlhh"
May 27 03:25:46.689417 kubelet[2671]: E0527 03:25:46.689009 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-w2jhg"
May 27 03:25:46.689417 kubelet[2671]: E0527 03:25:46.689026 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff97fb58b-ldlhh"
May 27 03:25:46.689417 kubelet[2671]: E0527 03:25:46.688932 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689529 kubelet[2671]: E0527 03:25:46.689041 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:25:46.689529 kubelet[2671]: E0527 03:25:46.689064 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:25:46.689529 kubelet[2671]: E0527 03:25:46.688944 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689529 kubelet[2671]: E0527 03:25:46.689083 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:25:46.689658 kubelet[2671]: E0527 03:25:46.689075 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-ff97fb58b-ldlhh_calico-system(6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-ff97fb58b-ldlhh_calico-system(6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38faea2a3d87a8bdd97edb9886f66c5ebe9a9d9fe008ddcd05e739664018d653\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ff97fb58b-ldlhh" podUID="6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6"
May 27 03:25:46.689658 kubelet[2671]: E0527 03:25:46.688879 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.689658 kubelet[2671]: E0527 03:25:46.689090 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs"
May 27 03:25:46.689756 kubelet[2671]: E0527 03:25:46.689105 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9"
May 27 03:25:46.689756 kubelet[2671]: E0527 03:25:46.689112 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs"
May 27 03:25:46.689756 kubelet[2671]: E0527 03:25:46.689119 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9"
May 27 03:25:46.689839 kubelet[2671]: E0527 03:25:46.689159 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fed6a723f967fcadb505c9b1e53408af7eff9a73cf2db0fc19d9bba7a32b812\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e"
May 27 03:25:46.689839 kubelet[2671]: E0527 03:25:46.689169 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76ccbfb48-d4dj9_calico-apiserver(505c9554-5eb5-4a9a-bd7b-577a3564eb3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76ccbfb48-d4dj9_calico-apiserver(505c9554-5eb5-4a9a-bd7b-577a3564eb3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2ef3fc7d155910bb2cdd244f5b0806a6ae7ce3916ba59974a36d435602114bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9" podUID="505c9554-5eb5-4a9a-bd7b-577a3564eb3e"
May 27 03:25:46.689962 kubelet[2671]: E0527 03:25:46.689056 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-w2jhg_calico-system(eec93da8-25f2-4392-a00f-ed24f87d6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-w2jhg_calico-system(eec93da8-25f2-4392-a00f-ed24f87d6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"472f46b46d00f9e6ddfc61278c3fe0773cb4e7f97d875317a2738215e886360f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-w2jhg" podUID="eec93da8-25f2-4392-a00f-ed24f87d6be8"
May 27 03:25:46.689962 kubelet[2671]: E0527 03:25:46.689169 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76ccbfb48-bpwgs_calico-apiserver(983a0221-2ac7-4637-a33d-b7cc65ccc040)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76ccbfb48-bpwgs_calico-apiserver(983a0221-2ac7-4637-a33d-b7cc65ccc040)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72a3e025a85c87ccffce9e55d4f28b4cba8565c502a762736bec0b8032c71a92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs" podUID="983a0221-2ac7-4637-a33d-b7cc65ccc040"
May 27 03:25:46.689962 kubelet[2671]: E0527 03:25:46.688891 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:46.690079 kubelet[2671]: E0527 03:25:46.689220 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:25:46.690079 kubelet[2671]: E0527 03:25:46.689231 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:25:46.690079 kubelet[2671]: E0527 03:25:46.689255 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd7c92aea3cd4513dffd2779ca3c8625e940b8c219ba7dfcccf5f0fa75aef8d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa"
May 27 03:25:46.690189 kubelet[2671]: E0527 03:25:46.689066 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:25:46.690189 kubelet[2671]: E0527 03:25:46.689293 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cf85b64f9750412eb6da1610739f98de6db39cbc8a22b07d1200938df52cdaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qpxp6" podUID="0025fdff-1c55-4c53-8432-c3b22baafc85"
May 27 03:25:46.972462 containerd[1580]: time="2025-05-27T03:25:46.972220142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 27 03:25:47.885655 systemd[1]: Created slice kubepods-besteffort-podb054e321_f80c_45e5_a80b_17a7bbc92d8f.slice - libcontainer container kubepods-besteffort-podb054e321_f80c_45e5_a80b_17a7bbc92d8f.slice.
May 27 03:25:47.887745 containerd[1580]: time="2025-05-27T03:25:47.887703122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}"
May 27 03:25:47.931575 containerd[1580]: time="2025-05-27T03:25:47.931525122Z" level=error msg="Failed to destroy network for sandbox \"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:47.934206 systemd[1]: run-netns-cni\x2d7fad5987\x2d8b74\x2d59f9\x2dc1c2\x2df5fd52100577.mount: Deactivated successfully.
May 27 03:25:47.934343 containerd[1580]: time="2025-05-27T03:25:47.934236994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:47.934522 kubelet[2671]: E0527 03:25:47.934484 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:25:47.934801 kubelet[2671]: E0527 03:25:47.934548 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:47.934801 kubelet[2671]: E0527 03:25:47.934568 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:25:47.934801 kubelet[2671]: E0527 03:25:47.934607 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb5d8773fc9c06cf1cae8a18414105922f92003c68a4be6f74c315d7678b4df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f"
May 27 03:25:50.072419 kubelet[2671]: I0527 03:25:50.072341 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:25:50.072419 kubelet[2671]: I0527 03:25:50.072405 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:25:50.076001 kubelet[2671]: I0527 03:25:50.075976 2671 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 03:25:50.089171 kubelet[2671]: I0527 03:25:50.089113 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:25:50.089334 kubelet[2671]: I0527 03:25:50.089244 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-78d55f7ddc-w2jhg","calico-apiserver/calico-apiserver-76ccbfb48-bpwgs","calico-apiserver/calico-apiserver-76ccbfb48-d4dj9","calico-system/whisker-ff97fb58b-ldlhh","calico-system/calico-kube-controllers-79469b85c4-szmp2","kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/csi-node-driver-lktnw","calico-system/calico-node-nl4v8","tigera-operator/tigera-operator-844669ff44-fr86j","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 03:25:50.096964 kubelet[2671]: I0527 03:25:50.096938 2671 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-78d55f7ddc-w2jhg"
May 27 03:25:50.096964 kubelet[2671]: I0527 03:25:50.096961 2671 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-78d55f7ddc-w2jhg"]
May 27 03:25:50.141075 kubelet[2671]: I0527 03:25:50.141019 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-ca-bundle\") pod \"eec93da8-25f2-4392-a00f-ed24f87d6be8\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") "
May 27 03:25:50.141075 kubelet[2671]: I0527 03:25:50.141074 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-key-pair\") pod \"eec93da8-25f2-4392-a00f-ed24f87d6be8\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") "
May 27 03:25:50.141319 kubelet[2671]: I0527 03:25:50.141113 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-config\") pod \"eec93da8-25f2-4392-a00f-ed24f87d6be8\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") "
May 27 03:25:50.141640 kubelet[2671]: I0527 03:25:50.141609 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "eec93da8-25f2-4392-a00f-ed24f87d6be8" (UID: "eec93da8-25f2-4392-a00f-ed24f87d6be8"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:25:50.142013 kubelet[2671]: I0527 03:25:50.141987 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8k8\" (UniqueName: \"kubernetes.io/projected/eec93da8-25f2-4392-a00f-ed24f87d6be8-kube-api-access-hv8k8\") pod \"eec93da8-25f2-4392-a00f-ed24f87d6be8\" (UID: \"eec93da8-25f2-4392-a00f-ed24f87d6be8\") "
May 27 03:25:50.142178 kubelet[2671]: I0527 03:25:50.142074 2671 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 27 03:25:50.142664 kubelet[2671]: I0527 03:25:50.142630 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-config" (OuterVolumeSpecName: "config") pod "eec93da8-25f2-4392-a00f-ed24f87d6be8" (UID: "eec93da8-25f2-4392-a00f-ed24f87d6be8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:25:50.157176 kubelet[2671]: I0527 03:25:50.152891 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "eec93da8-25f2-4392-a00f-ed24f87d6be8" (UID: "eec93da8-25f2-4392-a00f-ed24f87d6be8"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:25:50.154825 systemd[1]: var-lib-kubelet-pods-eec93da8\x2d25f2\x2d4392\x2da00f\x2ded24f87d6be8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv8k8.mount: Deactivated successfully.
May 27 03:25:50.154932 systemd[1]: var-lib-kubelet-pods-eec93da8\x2d25f2\x2d4392\x2da00f\x2ded24f87d6be8-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully.
May 27 03:25:50.158793 kubelet[2671]: I0527 03:25:50.158331 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec93da8-25f2-4392-a00f-ed24f87d6be8-kube-api-access-hv8k8" (OuterVolumeSpecName: "kube-api-access-hv8k8") pod "eec93da8-25f2-4392-a00f-ed24f87d6be8" (UID: "eec93da8-25f2-4392-a00f-ed24f87d6be8"). InnerVolumeSpecName "kube-api-access-hv8k8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:25:50.243353 kubelet[2671]: I0527 03:25:50.243301 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv8k8\" (UniqueName: \"kubernetes.io/projected/eec93da8-25f2-4392-a00f-ed24f87d6be8-kube-api-access-hv8k8\") on node \"localhost\" DevicePath \"\""
May 27 03:25:50.243353 kubelet[2671]: I0527 03:25:50.243335 2671 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eec93da8-25f2-4392-a00f-ed24f87d6be8-goldmane-key-pair\") on node \"localhost\" DevicePath \"\""
May 27 03:25:50.243353 kubelet[2671]: I0527 03:25:50.243345 2671 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec93da8-25f2-4392-a00f-ed24f87d6be8-config\") on node \"localhost\" DevicePath \"\""
May 27 03:25:50.341353 kubelet[2671]: I0527 03:25:50.340722 2671 kubelet.go:2351] "Pod admission denied" podUID="a18f5b54-98e7-4265-85ba-502b14aec7b7" pod="calico-system/goldmane-78d55f7ddc-7t7nw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 27 03:25:50.843248 kubelet[2671]: I0527 03:25:50.843152 2671 kubelet.go:2351] "Pod admission denied" podUID="55cea98d-23e9-403c-957c-52121a42c46d" pod="calico-system/goldmane-78d55f7ddc-j5zss" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 27 03:25:50.986787 systemd[1]: Removed slice kubepods-besteffort-podeec93da8_25f2_4392_a00f_ed24f87d6be8.slice - libcontainer container kubepods-besteffort-podeec93da8_25f2_4392_a00f_ed24f87d6be8.slice.
May 27 03:25:51.097957 kubelet[2671]: I0527 03:25:51.097815 2671 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-78d55f7ddc-w2jhg"]
May 27 03:25:51.109574 kubelet[2671]: I0527 03:25:51.109511 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:25:51.109574 kubelet[2671]: I0527 03:25:51.109554 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:25:51.111854 kubelet[2671]: I0527 03:25:51.111823 2671 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 03:25:51.123523 kubelet[2671]: I0527 03:25:51.123485 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:25:51.123671 kubelet[2671]: I0527 03:25:51.123571 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-76ccbfb48-d4dj9","calico-system/whisker-ff97fb58b-ldlhh","calico-apiserver/calico-apiserver-76ccbfb48-bpwgs","calico-system/calico-kube-controllers-79469b85c4-szmp2","kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","tigera-operator/tigera-operator-844669ff44-fr86j","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 03:25:51.127691 kubelet[2671]: I0527 03:25:51.127664 2671 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-76ccbfb48-d4dj9"
May 27 03:25:51.127691 kubelet[2671]: I0527 03:25:51.127683 2671 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-76ccbfb48-d4dj9"]
May 27 03:25:51.250645 kubelet[2671]: I0527 03:25:51.250586 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-calico-apiserver-certs\") pod \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\" (UID: \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\") "
May 27 03:25:51.250871 kubelet[2671]: I0527 03:25:51.250694 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvsf\" (UniqueName: \"kubernetes.io/projected/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-kube-api-access-9nvsf\") pod \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\" (UID: \"505c9554-5eb5-4a9a-bd7b-577a3564eb3e\") "
May 27 03:25:51.254250 kubelet[2671]: I0527 03:25:51.254188 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-kube-api-access-9nvsf" (OuterVolumeSpecName: "kube-api-access-9nvsf") pod "505c9554-5eb5-4a9a-bd7b-577a3564eb3e" (UID: "505c9554-5eb5-4a9a-bd7b-577a3564eb3e"). InnerVolumeSpecName "kube-api-access-9nvsf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:25:51.255770 kubelet[2671]: I0527 03:25:51.255739 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "505c9554-5eb5-4a9a-bd7b-577a3564eb3e" (UID: "505c9554-5eb5-4a9a-bd7b-577a3564eb3e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:25:51.256193 systemd[1]: var-lib-kubelet-pods-505c9554\x2d5eb5\x2d4a9a\x2dbd7b\x2d577a3564eb3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nvsf.mount: Deactivated successfully.
May 27 03:25:51.256351 systemd[1]: var-lib-kubelet-pods-505c9554\x2d5eb5\x2d4a9a\x2dbd7b\x2d577a3564eb3e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 27 03:25:51.351775 kubelet[2671]: I0527 03:25:51.351609 2671 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 27 03:25:51.351775 kubelet[2671]: I0527 03:25:51.351643 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nvsf\" (UniqueName: \"kubernetes.io/projected/505c9554-5eb5-4a9a-bd7b-577a3564eb3e-kube-api-access-9nvsf\") on node \"localhost\" DevicePath \"\""
May 27 03:25:51.439442 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676).
May 27 03:25:51.492011 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:25:51.493679 sshd-session[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:51.498861 systemd-logind[1505]: New session 8 of user core.
May 27 03:25:51.508302 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 03:25:51.629329 sshd[3705]: Connection closed by 10.0.0.1 port 43676
May 27 03:25:51.629599 sshd-session[3703]: pam_unix(sshd:session): session closed for user core
May 27 03:25:51.634064 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:43676.service: Deactivated successfully.
May 27 03:25:51.636399 systemd[1]: session-8.scope: Deactivated successfully.
May 27 03:25:51.637246 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. May 27 03:25:51.638956 systemd-logind[1505]: Removed session 8. May 27 03:25:51.887581 systemd[1]: Removed slice kubepods-besteffort-pod505c9554_5eb5_4a9a_bd7b_577a3564eb3e.slice - libcontainer container kubepods-besteffort-pod505c9554_5eb5_4a9a_bd7b_577a3564eb3e.slice. May 27 03:25:52.128514 kubelet[2671]: I0527 03:25:52.128419 2671 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-76ccbfb48-d4dj9"] May 27 03:25:52.144223 kubelet[2671]: I0527 03:25:52.144062 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:25:52.144223 kubelet[2671]: I0527 03:25:52.144181 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:25:52.147590 kubelet[2671]: I0527 03:25:52.147561 2671 image_gc_manager.go:431] "Attempting to delete unused images" May 27 03:25:52.160493 kubelet[2671]: I0527 03:25:52.160462 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:25:52.160657 kubelet[2671]: I0527 03:25:52.160556 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-76ccbfb48-bpwgs","calico-system/whisker-ff97fb58b-ldlhh","kube-system/coredns-668d6bf9bc-qpxp6","calico-system/calico-kube-controllers-79469b85c4-szmp2","kube-system/coredns-668d6bf9bc-lz678","calico-system/csi-node-driver-lktnw","calico-system/calico-node-nl4v8","tigera-operator/tigera-operator-844669ff44-fr86j","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:25:52.165031 kubelet[2671]: I0527 03:25:52.165003 2671 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" 
pod="calico-apiserver/calico-apiserver-76ccbfb48-bpwgs" May 27 03:25:52.165031 kubelet[2671]: I0527 03:25:52.165025 2671 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-76ccbfb48-bpwgs"] May 27 03:25:52.257273 kubelet[2671]: I0527 03:25:52.257201 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrlm\" (UniqueName: \"kubernetes.io/projected/983a0221-2ac7-4637-a33d-b7cc65ccc040-kube-api-access-hkrlm\") pod \"983a0221-2ac7-4637-a33d-b7cc65ccc040\" (UID: \"983a0221-2ac7-4637-a33d-b7cc65ccc040\") " May 27 03:25:52.257273 kubelet[2671]: I0527 03:25:52.257264 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/983a0221-2ac7-4637-a33d-b7cc65ccc040-calico-apiserver-certs\") pod \"983a0221-2ac7-4637-a33d-b7cc65ccc040\" (UID: \"983a0221-2ac7-4637-a33d-b7cc65ccc040\") " May 27 03:25:52.262986 systemd[1]: var-lib-kubelet-pods-983a0221\x2d2ac7\x2d4637\x2da33d\x2db7cc65ccc040-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 27 03:25:52.263380 kubelet[2671]: I0527 03:25:52.263288 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983a0221-2ac7-4637-a33d-b7cc65ccc040-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "983a0221-2ac7-4637-a33d-b7cc65ccc040" (UID: "983a0221-2ac7-4637-a33d-b7cc65ccc040"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:25:52.263602 kubelet[2671]: I0527 03:25:52.263556 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983a0221-2ac7-4637-a33d-b7cc65ccc040-kube-api-access-hkrlm" (OuterVolumeSpecName: "kube-api-access-hkrlm") pod "983a0221-2ac7-4637-a33d-b7cc65ccc040" (UID: "983a0221-2ac7-4637-a33d-b7cc65ccc040"). InnerVolumeSpecName "kube-api-access-hkrlm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:25:52.266191 systemd[1]: var-lib-kubelet-pods-983a0221\x2d2ac7\x2d4637\x2da33d\x2db7cc65ccc040-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkrlm.mount: Deactivated successfully. May 27 03:25:52.358373 kubelet[2671]: I0527 03:25:52.358315 2671 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/983a0221-2ac7-4637-a33d-b7cc65ccc040-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 27 03:25:52.358373 kubelet[2671]: I0527 03:25:52.358347 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkrlm\" (UniqueName: \"kubernetes.io/projected/983a0221-2ac7-4637-a33d-b7cc65ccc040-kube-api-access-hkrlm\") on node \"localhost\" DevicePath \"\"" May 27 03:25:52.990988 systemd[1]: Removed slice kubepods-besteffort-pod983a0221_2ac7_4637_a33d_b7cc65ccc040.slice - libcontainer container kubepods-besteffort-pod983a0221_2ac7_4637_a33d_b7cc65ccc040.slice. 
May 27 03:25:53.165344 kubelet[2671]: I0527 03:25:53.165270 2671 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-76ccbfb48-bpwgs"] May 27 03:25:53.180611 kubelet[2671]: I0527 03:25:53.180562 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:25:53.180611 kubelet[2671]: I0527 03:25:53.180603 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:25:53.182733 kubelet[2671]: I0527 03:25:53.182705 2671 image_gc_manager.go:431] "Attempting to delete unused images" May 27 03:25:53.213265 kubelet[2671]: I0527 03:25:53.213228 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:25:53.213634 kubelet[2671]: I0527 03:25:53.213584 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-ff97fb58b-ldlhh","kube-system/coredns-668d6bf9bc-lz678","kube-system/coredns-668d6bf9bc-qpxp6","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","tigera-operator/tigera-operator-844669ff44-fr86j","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:25:53.231343 kubelet[2671]: I0527 03:25:53.231311 2671 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-ff97fb58b-ldlhh" May 27 03:25:53.231343 kubelet[2671]: I0527 03:25:53.231332 2671 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-ff97fb58b-ldlhh"] May 27 03:25:53.367179 kubelet[2671]: I0527 03:25:53.366700 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr6d4\" (UniqueName: 
\"kubernetes.io/projected/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-kube-api-access-xr6d4\") pod \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " May 27 03:25:53.367179 kubelet[2671]: I0527 03:25:53.366756 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-ca-bundle\") pod \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " May 27 03:25:53.367179 kubelet[2671]: I0527 03:25:53.366774 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-backend-key-pair\") pod \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\" (UID: \"6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6\") " May 27 03:25:53.367517 kubelet[2671]: I0527 03:25:53.367463 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6" (UID: "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:25:53.370647 kubelet[2671]: I0527 03:25:53.370597 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6" (UID: "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:25:53.370765 kubelet[2671]: I0527 03:25:53.370733 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-kube-api-access-xr6d4" (OuterVolumeSpecName: "kube-api-access-xr6d4") pod "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6" (UID: "6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6"). InnerVolumeSpecName "kube-api-access-xr6d4". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:25:53.372508 systemd[1]: var-lib-kubelet-pods-6a7e716e\x2d6209\x2d4c4d\x2db385\x2dcdc9a5ecc1e6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:25:53.372640 systemd[1]: var-lib-kubelet-pods-6a7e716e\x2d6209\x2d4c4d\x2db385\x2dcdc9a5ecc1e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxr6d4.mount: Deactivated successfully. May 27 03:25:53.467952 kubelet[2671]: I0527 03:25:53.467910 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 03:25:53.467952 kubelet[2671]: I0527 03:25:53.467942 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 03:25:53.467952 kubelet[2671]: I0527 03:25:53.467953 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xr6d4\" (UniqueName: \"kubernetes.io/projected/6a7e716e-6209-4c4d-b385-cdc9a5ecc1e6-kube-api-access-xr6d4\") on node \"localhost\" DevicePath \"\"" May 27 03:25:53.887036 systemd[1]: Removed slice kubepods-besteffort-pod6a7e716e_6209_4c4d_b385_cdc9a5ecc1e6.slice - libcontainer container kubepods-besteffort-pod6a7e716e_6209_4c4d_b385_cdc9a5ecc1e6.slice. 
May 27 03:25:54.056955 kubelet[2671]: I0527 03:25:54.056892 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:25:54.232180 kubelet[2671]: I0527 03:25:54.231996 2671 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-ff97fb58b-ldlhh"] May 27 03:25:54.243152 kubelet[2671]: I0527 03:25:54.243109 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:25:54.243217 kubelet[2671]: I0527 03:25:54.243162 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:25:54.247722 kubelet[2671]: I0527 03:25:54.247576 2671 image_gc_manager.go:431] "Attempting to delete unused images" May 27 03:25:54.259864 kubelet[2671]: I0527 03:25:54.259828 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:25:54.259962 kubelet[2671]: I0527 03:25:54.259911 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","tigera-operator/tigera-operator-844669ff44-fr86j","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:25:54.259962 kubelet[2671]: E0527 03:25:54.259943 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:25:54.259962 kubelet[2671]: E0527 03:25:54.259953 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:25:54.259962 kubelet[2671]: E0527 03:25:54.259960 2671 eviction_manager.go:609] "Eviction manager: cannot 
evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:25:54.259962 kubelet[2671]: E0527 03:25:54.259966 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8" May 27 03:25:54.260125 kubelet[2671]: E0527 03:25:54.259974 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw" May 27 03:25:54.682901 containerd[1580]: time="2025-05-27T03:25:54.682827272Z" level=info msg="StopContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" with timeout 2 (s)" May 27 03:25:54.696094 containerd[1580]: time="2025-05-27T03:25:54.696024210Z" level=info msg="Stop container \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" with signal terminated" May 27 03:25:54.747129 systemd[1]: cri-containerd-e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77.scope: Deactivated successfully. May 27 03:25:54.747588 systemd[1]: cri-containerd-e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77.scope: Consumed 4.451s CPU time, 74.7M memory peak. 
May 27 03:25:54.760929 containerd[1580]: time="2025-05-27T03:25:54.748195375Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log: no space left on device" May 27 03:25:54.761038 containerd[1580]: time="2025-05-27T03:25:54.748891463Z" level=info msg="received exit event container_id:\"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" id:\"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" pid:2994 exited_at:{seconds:1748316354 nanos:748504857}" May 27 03:25:54.761208 containerd[1580]: time="2025-05-27T03:25:54.760973146Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log: no space left on device" May 27 03:25:54.761259 containerd[1580]: time="2025-05-27T03:25:54.748979909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" id:\"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" pid:2994 exited_at:{seconds:1748316354 nanos:748504857}" May 27 03:25:54.761663 containerd[1580]: time="2025-05-27T03:25:54.761315460Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-6d64b75d5-w8w2n_d5855a5c-41c3-4994-a5e6-2f21333fef2e/calico-typha/0.log: no space left on device" May 27 03:25:54.768765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991994897.mount: 
Deactivated successfully. May 27 03:25:54.829495 containerd[1580]: time="2025-05-27T03:25:54.829408794Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829495 containerd[1580]: time="2025-05-27T03:25:54.829493784Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829660 containerd[1580]: time="2025-05-27T03:25:54.829546272Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829660 containerd[1580]: time="2025-05-27T03:25:54.829580246Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829660 containerd[1580]: time="2025-05-27T03:25:54.829610703Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" 
error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829660 containerd[1580]: time="2025-05-27T03:25:54.829639347Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829806 containerd[1580]: time="2025-05-27T03:25:54.829670295Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829806 containerd[1580]: time="2025-05-27T03:25:54.829718706Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829806 containerd[1580]: time="2025-05-27T03:25:54.829751899Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829918 containerd[1580]: time="2025-05-27T03:25:54.829802935Z" level=error msg="Fail to write \"stderr\" log to log file 
\"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829918 containerd[1580]: time="2025-05-27T03:25:54.829840655Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.829918 containerd[1580]: time="2025-05-27T03:25:54.829882424Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830022 containerd[1580]: time="2025-05-27T03:25:54.829930764Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830022 containerd[1580]: time="2025-05-27T03:25:54.829965580Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 
03:25:54.830022 containerd[1580]: time="2025-05-27T03:25:54.829997741Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830176 containerd[1580]: time="2025-05-27T03:25:54.830056621Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830176 containerd[1580]: time="2025-05-27T03:25:54.830109060Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830176 containerd[1580]: time="2025-05-27T03:25:54.830169523Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830290 containerd[1580]: time="2025-05-27T03:25:54.830208126Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write 
/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830290 containerd[1580]: time="2025-05-27T03:25:54.830241999Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830290 containerd[1580]: time="2025-05-27T03:25:54.830273820Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830394 containerd[1580]: time="2025-05-27T03:25:54.830326328Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830394 containerd[1580]: time="2025-05-27T03:25:54.830372896Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830472 containerd[1580]: time="2025-05-27T03:25:54.830406970Z" level=error msg="Fail to write \"stderr\" log to log file 
\"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830472 containerd[1580]: time="2025-05-27T03:25:54.830439771Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830543 containerd[1580]: time="2025-05-27T03:25:54.830485066Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830543 containerd[1580]: time="2025-05-27T03:25:54.830519712Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830622 containerd[1580]: time="2025-05-27T03:25:54.830552574Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 
03:25:54.830622 containerd[1580]: time="2025-05-27T03:25:54.830585926Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830700 containerd[1580]: time="2025-05-27T03:25:54.830618106Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830700 containerd[1580]: time="2025-05-27T03:25:54.830668531Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830770 containerd[1580]: time="2025-05-27T03:25:54.830703296Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830770 containerd[1580]: time="2025-05-27T03:25:54.830738432Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write 
/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830845 containerd[1580]: time="2025-05-27T03:25:54.830770432Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830845 containerd[1580]: time="2025-05-27T03:25:54.830821950Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830915 containerd[1580]: time="2025-05-27T03:25:54.830856685Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830915 containerd[1580]: time="2025-05-27T03:25:54.830890228Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.830991 containerd[1580]: time="2025-05-27T03:25:54.830921446Z" level=error msg="Fail to write \"stderr\" log to log file 
\"/var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-844669ff44-fr86j_4cdf2d34-39af-4acb-bcfa-79504bf9a2ab/tigera-operator/0.log: no space left on device" May 27 03:25:54.841394 containerd[1580]: time="2025-05-27T03:25:54.841360582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:25:54.847600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77-rootfs.mount: Deactivated successfully. May 27 03:25:54.851766 containerd[1580]: time="2025-05-27T03:25:54.851664224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1991994897: write /var/lib/containerd/tmpmounts/containerd-mount1991994897/usr/bin/calico-node: no space left on device" May 27 03:25:54.852060 kubelet[2671]: E0527 03:25:54.851985 2671 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1991994897: write /var/lib/containerd/tmpmounts/containerd-mount1991994897/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 03:25:54.852206 kubelet[2671]: E0527 03:25:54.852058 2671 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract 
layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1991994897: write /var/lib/containerd/tmpmounts/containerd-mount1991994897/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 03:25:54.861208 kubelet[2671]: E0527 03:25:54.857258 2671 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar
{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4vj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nl4v8_calico-system(3ba286e9-822e-413a-a6bf-426b06794d9c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1991994897: write 
/var/lib/containerd/tmpmounts/containerd-mount1991994897/usr/bin/calico-node: no space left on device" logger="UnhandledError" May 27 03:25:54.861576 kubelet[2671]: E0527 03:25:54.858649 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1991994897: write /var/lib/containerd/tmpmounts/containerd-mount1991994897/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-nl4v8" podUID="3ba286e9-822e-413a-a6bf-426b06794d9c" May 27 03:25:55.084964 containerd[1580]: time="2025-05-27T03:25:55.084899248Z" level=info msg="StopContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" returns successfully" May 27 03:25:55.085649 containerd[1580]: time="2025-05-27T03:25:55.085612368Z" level=info msg="StopPodSandbox for \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\"" May 27 03:25:55.090322 containerd[1580]: time="2025-05-27T03:25:55.090279627Z" level=info msg="Container to stop \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:25:55.098450 systemd[1]: cri-containerd-25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e.scope: Deactivated successfully. 
May 27 03:25:55.100720 containerd[1580]: time="2025-05-27T03:25:55.100682453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" id:\"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" pid:2801 exit_status:137 exited_at:{seconds:1748316355 nanos:100427694}" May 27 03:25:55.130884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e-rootfs.mount: Deactivated successfully. May 27 03:25:55.249222 containerd[1580]: time="2025-05-27T03:25:55.249158028Z" level=info msg="shim disconnected" id=25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e namespace=k8s.io May 27 03:25:55.249222 containerd[1580]: time="2025-05-27T03:25:55.249204486Z" level=warning msg="cleaning up after shim disconnected" id=25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e namespace=k8s.io May 27 03:25:55.256327 containerd[1580]: time="2025-05-27T03:25:55.249217360Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:25:55.288160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e-shm.mount: Deactivated successfully. 
May 27 03:25:55.296676 containerd[1580]: time="2025-05-27T03:25:55.296617949Z" level=info msg="TearDown network for sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" successfully" May 27 03:25:55.296676 containerd[1580]: time="2025-05-27T03:25:55.296659397Z" level=info msg="StopPodSandbox for \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" returns successfully" May 27 03:25:55.300967 containerd[1580]: time="2025-05-27T03:25:55.300934870Z" level=info msg="received exit event sandbox_id:\"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" exit_status:137 exited_at:{seconds:1748316355 nanos:100427694}" May 27 03:25:55.302228 kubelet[2671]: I0527 03:25:55.302201 2671 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-844669ff44-fr86j" May 27 03:25:55.302228 kubelet[2671]: I0527 03:25:55.302225 2671 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-844669ff44-fr86j"] May 27 03:25:55.382351 kubelet[2671]: I0527 03:25:55.382204 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlsvq\" (UniqueName: \"kubernetes.io/projected/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-kube-api-access-rlsvq\") pod \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\" (UID: \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\") " May 27 03:25:55.382351 kubelet[2671]: I0527 03:25:55.382244 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-var-lib-calico\") pod \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\" (UID: \"4cdf2d34-39af-4acb-bcfa-79504bf9a2ab\") " May 27 03:25:55.382715 kubelet[2671]: I0527 03:25:55.382673 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "4cdf2d34-39af-4acb-bcfa-79504bf9a2ab" (UID: "4cdf2d34-39af-4acb-bcfa-79504bf9a2ab"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:25:55.385632 kubelet[2671]: I0527 03:25:55.385604 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-kube-api-access-rlsvq" (OuterVolumeSpecName: "kube-api-access-rlsvq") pod "4cdf2d34-39af-4acb-bcfa-79504bf9a2ab" (UID: "4cdf2d34-39af-4acb-bcfa-79504bf9a2ab"). InnerVolumeSpecName "kube-api-access-rlsvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:25:55.387192 systemd[1]: var-lib-kubelet-pods-4cdf2d34\x2d39af\x2d4acb\x2dbcfa\x2d79504bf9a2ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlsvq.mount: Deactivated successfully. May 27 03:25:55.482602 kubelet[2671]: I0527 03:25:55.482503 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlsvq\" (UniqueName: \"kubernetes.io/projected/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-kube-api-access-rlsvq\") on node \"localhost\" DevicePath \"\"" May 27 03:25:55.482602 kubelet[2671]: I0527 03:25:55.482544 2671 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cdf2d34-39af-4acb-bcfa-79504bf9a2ab-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 27 03:25:55.887317 systemd[1]: Removed slice kubepods-besteffort-pod4cdf2d34_39af_4acb_bcfa_79504bf9a2ab.slice - libcontainer container kubepods-besteffort-pod4cdf2d34_39af_4acb_bcfa_79504bf9a2ab.slice. May 27 03:25:55.887415 systemd[1]: kubepods-besteffort-pod4cdf2d34_39af_4acb_bcfa_79504bf9a2ab.slice: Consumed 4.482s CPU time, 75M memory peak. 
May 27 03:25:55.991982 kubelet[2671]: I0527 03:25:55.991937 2671 scope.go:117] "RemoveContainer" containerID="e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77" May 27 03:25:55.993869 containerd[1580]: time="2025-05-27T03:25:55.993811005Z" level=info msg="RemoveContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\"" May 27 03:25:56.011953 containerd[1580]: time="2025-05-27T03:25:56.011893336Z" level=info msg="RemoveContainer for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" returns successfully" May 27 03:25:56.016705 kubelet[2671]: I0527 03:25:56.016659 2671 scope.go:117] "RemoveContainer" containerID="e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77" May 27 03:25:56.016993 containerd[1580]: time="2025-05-27T03:25:56.016949906Z" level=error msg="ContainerStatus for \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\": not found" May 27 03:25:56.017154 kubelet[2671]: E0527 03:25:56.017110 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\": not found" containerID="e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77" May 27 03:25:56.017218 kubelet[2671]: I0527 03:25:56.017166 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77"} err="failed to get container status \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\": rpc error: code = NotFound desc = an error occurred when try to find container \"e06f97ca3872e61b522a11b5db341e73752fde43565484b6accb072b74904f77\": not found" May 27 03:25:56.302796 
kubelet[2671]: I0527 03:25:56.302725 2671 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-844669ff44-fr86j"] May 27 03:25:56.642940 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:40210.service - OpenSSH per-connection server daemon (10.0.0.1:40210). May 27 03:25:56.701304 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 40210 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:25:56.702779 sshd-session[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:56.707415 systemd-logind[1505]: New session 9 of user core. May 27 03:25:56.718348 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:25:56.839758 sshd[3796]: Connection closed by 10.0.0.1 port 40210 May 27 03:25:56.840472 sshd-session[3794]: pam_unix(sshd:session): session closed for user core May 27 03:25:56.844847 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:40210.service: Deactivated successfully. May 27 03:25:56.847489 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:25:56.848344 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. May 27 03:25:56.849707 systemd-logind[1505]: Removed session 9. 
May 27 03:25:57.879368 containerd[1580]: time="2025-05-27T03:25:57.879303276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}" May 27 03:25:57.934260 containerd[1580]: time="2025-05-27T03:25:57.934182795Z" level=error msg="Failed to destroy network for sandbox \"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:57.935762 containerd[1580]: time="2025-05-27T03:25:57.935577915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:57.936111 kubelet[2671]: E0527 03:25:57.936046 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:57.936528 kubelet[2671]: E0527 03:25:57.936157 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:25:57.936528 kubelet[2671]: E0527 03:25:57.936190 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:25:57.936501 systemd[1]: run-netns-cni\x2dd59bb2a7\x2d02e9\x2d9039\x2d9755\x2d12a59857728d.mount: Deactivated successfully. May 27 03:25:57.937182 kubelet[2671]: E0527 03:25:57.936267 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1020478c05fadcd4d6df72eb00c140b77020c5dd0421f5bcc0e4bca8da7d8d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa" May 27 03:25:58.878940 containerd[1580]: time="2025-05-27T03:25:58.878883293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}" May 27 03:25:58.925097 containerd[1580]: time="2025-05-27T03:25:58.925024899Z" level=error msg="Failed to destroy network for sandbox 
\"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:58.927579 containerd[1580]: time="2025-05-27T03:25:58.927320841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:58.927766 kubelet[2671]: E0527 03:25:58.927591 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:58.927766 kubelet[2671]: E0527 03:25:58.927671 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:25:58.927766 kubelet[2671]: E0527 03:25:58.927700 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:25:58.927636 systemd[1]: run-netns-cni\x2dee2a1806\x2d9f28\x2d0f7a\x2d831e\x2dbf9266f64f57.mount: Deactivated successfully. May 27 03:25:58.927950 kubelet[2671]: E0527 03:25:58.927759 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fb722f85d729ff02cfd74b0449eef6733a71e8c6bd0155d5041cc4a87635937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:25:59.879152 containerd[1580]: time="2025-05-27T03:25:59.879076047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}" May 27 03:25:59.938342 containerd[1580]: time="2025-05-27T03:25:59.938289044Z" level=error msg="Failed to destroy network for sandbox \"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:59.939697 containerd[1580]: time="2025-05-27T03:25:59.939649679Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:59.940449 kubelet[2671]: E0527 03:25:59.940066 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:25:59.940449 kubelet[2671]: E0527 03:25:59.940156 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:25:59.940449 kubelet[2671]: E0527 03:25:59.940176 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:25:59.940449 kubelet[2671]: E0527 
03:25:59.940220 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26542e6e8125999ac7b9f4d6b2bf6c4c3e5102df9ad2b0a3cfe63a7e8cf4b298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e" May 27 03:25:59.941405 systemd[1]: run-netns-cni\x2d186ce8c4\x2def21\x2dd381\x2d88a0\x2d8a5f494234a8.mount: Deactivated successfully. May 27 03:26:01.852462 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332). May 27 03:26:01.879301 containerd[1580]: time="2025-05-27T03:26:01.879260448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,}" May 27 03:26:01.897031 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:01.898552 sshd-session[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:01.903023 systemd-logind[1505]: New session 10 of user core. May 27 03:26:01.908398 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 27 03:26:01.980104 containerd[1580]: time="2025-05-27T03:26:01.980028975Z" level=error msg="Failed to destroy network for sandbox \"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:01.983921 systemd[1]: run-netns-cni\x2d63dd2fc8\x2d6f01\x2d7da0\x2d6228\x2d9408f474f9a9.mount: Deactivated successfully. May 27 03:26:02.008122 containerd[1580]: time="2025-05-27T03:26:02.008051704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:02.008674 kubelet[2671]: E0527 03:26:02.008390 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:02.008674 kubelet[2671]: E0527 03:26:02.008459 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:02.008674 kubelet[2671]: E0527 03:26:02.008478 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:02.008674 kubelet[2671]: E0527 03:26:02.008523 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1e9b9ad2e0b2646a6c1f65d88077c01b1f5bfc735534f3e209495cd44a42e9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qpxp6" podUID="0025fdff-1c55-4c53-8432-c3b22baafc85" May 27 03:26:02.059608 sshd[3908]: Connection closed by 10.0.0.1 port 40332 May 27 03:26:02.059967 sshd-session[3906]: pam_unix(sshd:session): session closed for user core May 27 03:26:02.063956 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:40332.service: Deactivated successfully. May 27 03:26:02.066212 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:26:02.067652 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. May 27 03:26:02.068935 systemd-logind[1505]: Removed session 10. 
May 27 03:26:06.342394 kubelet[2671]: I0527 03:26:06.342342 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:26:06.342394 kubelet[2671]: I0527 03:26:06.342389 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:26:06.343966 containerd[1580]: time="2025-05-27T03:26:06.343923990Z" level=info msg="StopPodSandbox for \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\"" May 27 03:26:06.344363 containerd[1580]: time="2025-05-27T03:26:06.344114939Z" level=info msg="TearDown network for sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" successfully" May 27 03:26:06.344363 containerd[1580]: time="2025-05-27T03:26:06.344164622Z" level=info msg="StopPodSandbox for \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" returns successfully" May 27 03:26:06.344577 containerd[1580]: time="2025-05-27T03:26:06.344538413Z" level=info msg="RemovePodSandbox for \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\"" May 27 03:26:06.349634 containerd[1580]: time="2025-05-27T03:26:06.349596207Z" level=info msg="Forcibly stopping sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\"" May 27 03:26:06.349713 containerd[1580]: time="2025-05-27T03:26:06.349686006Z" level=info msg="TearDown network for sandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" successfully" May 27 03:26:06.373390 containerd[1580]: time="2025-05-27T03:26:06.373334466Z" level=info msg="Ensure that sandbox 25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e in task-service has been cleanup successfully" May 27 03:26:06.406337 containerd[1580]: time="2025-05-27T03:26:06.406281250Z" level=info msg="RemovePodSandbox \"25360b72948a782a41a4c876ccb263372a397673797c90ee82cf3ede567bd41e\" returns successfully" May 27 03:26:06.406973 kubelet[2671]: I0527 03:26:06.406932 2671 image_gc_manager.go:431] 
"Attempting to delete unused images" May 27 03:26:06.417923 kubelet[2671]: I0527 03:26:06.417878 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:26:06.418043 kubelet[2671]: I0527 03:26:06.417956 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.417985 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.417993 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418000 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418008 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418014 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418023 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418032 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418041 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk" May 27 03:26:06.418043 kubelet[2671]: E0527 03:26:06.418049 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 03:26:06.418394 kubelet[2671]: E0527 03:26:06.418058 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 03:26:06.418394 kubelet[2671]: I0527 03:26:06.418070 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 03:26:07.077697 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:51966.service - OpenSSH per-connection server daemon (10.0.0.1:51966). May 27 03:26:07.132562 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 51966 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:07.134484 sshd-session[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:07.139349 systemd-logind[1505]: New session 11 of user core. May 27 03:26:07.153263 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:26:07.268463 sshd[3958]: Connection closed by 10.0.0.1 port 51966 May 27 03:26:07.268878 sshd-session[3956]: pam_unix(sshd:session): session closed for user core May 27 03:26:07.281103 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:51966.service: Deactivated successfully. May 27 03:26:07.283186 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:26:07.284234 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. May 27 03:26:07.287764 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:51980.service - OpenSSH per-connection server daemon (10.0.0.1:51980). May 27 03:26:07.288662 systemd-logind[1505]: Removed session 11. 
May 27 03:26:07.344474 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 51980 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:07.346384 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:07.351671 systemd-logind[1505]: New session 12 of user core. May 27 03:26:07.366285 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:26:07.518173 sshd[3974]: Connection closed by 10.0.0.1 port 51980 May 27 03:26:07.519594 sshd-session[3972]: pam_unix(sshd:session): session closed for user core May 27 03:26:07.530466 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:51980.service: Deactivated successfully. May 27 03:26:07.533429 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:26:07.534536 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. May 27 03:26:07.541160 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994). May 27 03:26:07.542813 systemd-logind[1505]: Removed session 12. May 27 03:26:07.589302 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:07.591485 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:07.596468 systemd-logind[1505]: New session 13 of user core. May 27 03:26:07.610378 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:26:07.725853 sshd[3988]: Connection closed by 10.0.0.1 port 51994 May 27 03:26:07.726345 sshd-session[3986]: pam_unix(sshd:session): session closed for user core May 27 03:26:07.730934 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:51994.service: Deactivated successfully. May 27 03:26:07.733271 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:26:07.734301 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. 
May 27 03:26:07.735757 systemd-logind[1505]: Removed session 13. May 27 03:26:08.879698 containerd[1580]: time="2025-05-27T03:26:08.879639178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:26:10.879302 containerd[1580]: time="2025-05-27T03:26:10.879187615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}" May 27 03:26:11.669789 containerd[1580]: time="2025-05-27T03:26:11.669660883Z" level=error msg="Failed to destroy network for sandbox \"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:11.671836 systemd[1]: run-netns-cni\x2de9557f2c\x2de17b\x2d49f5\x2d4620\x2dd15449160b65.mount: Deactivated successfully. May 27 03:26:11.717009 containerd[1580]: time="2025-05-27T03:26:11.716954555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:11.717368 kubelet[2671]: E0527 03:26:11.717297 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 
03:26:11.718126 kubelet[2671]: E0527 03:26:11.717427 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:11.718126 kubelet[2671]: E0527 03:26:11.717453 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:11.718126 kubelet[2671]: E0527 03:26:11.717529 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8661101259b63654fe280967566f3b404e24298afafdb581bc06bdf8ff61b7cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa" May 27 03:26:12.738239 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:52000.service - OpenSSH per-connection server daemon (10.0.0.1:52000). 
May 27 03:26:12.787426 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 52000 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:12.789473 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:12.794710 systemd-logind[1505]: New session 14 of user core. May 27 03:26:12.800279 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:26:12.879789 containerd[1580]: time="2025-05-27T03:26:12.879473538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}" May 27 03:26:12.879789 containerd[1580]: time="2025-05-27T03:26:12.879565892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}" May 27 03:26:12.910699 sshd[4041]: Connection closed by 10.0.0.1 port 52000 May 27 03:26:12.911047 sshd-session[4039]: pam_unix(sshd:session): session closed for user core May 27 03:26:12.915844 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:52000.service: Deactivated successfully. May 27 03:26:12.918020 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:26:12.918905 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. May 27 03:26:12.920282 systemd-logind[1505]: Removed session 14. May 27 03:26:13.692175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055963045.mount: Deactivated successfully. 
May 27 03:26:13.877029 containerd[1580]: time="2025-05-27T03:26:13.876956060Z" level=error msg="Failed to destroy network for sandbox \"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.879223 systemd[1]: run-netns-cni\x2d61b1187a\x2de223\x2d5924\x2d611d\x2d0a677b758d55.mount: Deactivated successfully.
May 27 03:26:13.883842 containerd[1580]: time="2025-05-27T03:26:13.883766960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device"
May 27 03:26:13.884299 containerd[1580]: time="2025-05-27T03:26:13.883858622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372"
May 27 03:26:13.884329 kubelet[2671]: E0527 03:26:13.884011 2671 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0"
May 27 03:26:13.884329 kubelet[2671]: E0527 03:26:13.884054 2671 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0"
May 27 03:26:13.884654 kubelet[2671]: E0527 03:26:13.884279 2671 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4vj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nl4v8_calico-system(3ba286e9-822e-413a-a6bf-426b06794d9c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device" logger="UnhandledError"
May 27 03:26:13.886345 kubelet[2671]: E0527 03:26:13.886301 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-nl4v8" podUID="3ba286e9-822e-413a-a6bf-426b06794d9c"
May 27 03:26:13.923734 containerd[1580]: time="2025-05-27T03:26:13.923658685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.923997 kubelet[2671]: E0527 03:26:13.923894 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.923997 kubelet[2671]: E0527 03:26:13.923956 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:13.923997 kubelet[2671]: E0527 03:26:13.923975 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:13.924120 kubelet[2671]: E0527 03:26:13.924019 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d6c93df624149aadefd23525533b45fca68dd38ae0a9ec8fc474805aa981498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f"
May 27 03:26:13.943357 containerd[1580]: time="2025-05-27T03:26:13.943208941Z" level=error msg="Failed to destroy network for sandbox \"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.945649 systemd[1]: run-netns-cni\x2d06f06540\x2d81b0\x2ddbee\x2dcd51\x2df76f2fb1a6fc.mount: Deactivated successfully.
May 27 03:26:13.968932 containerd[1580]: time="2025-05-27T03:26:13.968854012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.969217 kubelet[2671]: E0527 03:26:13.969163 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:13.969288 kubelet[2671]: E0527 03:26:13.969237 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:13.969288 kubelet[2671]: E0527 03:26:13.969260 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:13.969337 kubelet[2671]: E0527 03:26:13.969308 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b8447f64acb122e0fa1ef3e62d954730b0967327d9df1a140a095b88fac623c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e"
May 27 03:26:14.879031 containerd[1580]: time="2025-05-27T03:26:14.878961298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,}"
May 27 03:26:14.955973 containerd[1580]: time="2025-05-27T03:26:14.955896678Z" level=error msg="Failed to destroy network for sandbox \"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:14.958196 systemd[1]: run-netns-cni\x2d0bd392e7\x2d0947\x2d54a9\x2de858\x2d5079bb5fea53.mount: Deactivated successfully.
May 27 03:26:14.962071 containerd[1580]: time="2025-05-27T03:26:14.962027723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:14.962331 kubelet[2671]: E0527 03:26:14.962281 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:14.962584 kubelet[2671]: E0527 03:26:14.962352 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:14.962584 kubelet[2671]: E0527 03:26:14.962375 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:14.962584 kubelet[2671]: E0527 03:26:14.962438 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3a19b574520111e2599c89d7ce849062f2322c551c02dece074b6228333d4ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qpxp6" podUID="0025fdff-1c55-4c53-8432-c3b22baafc85"
May 27 03:26:16.432587 kubelet[2671]: I0527 03:26:16.432535 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:26:16.432587 kubelet[2671]: I0527 03:26:16.432584 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:26:16.433758 kubelet[2671]: I0527 03:26:16.433739 2671 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 03:26:16.445114 kubelet[2671]: I0527 03:26:16.445070 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:26:16.445292 kubelet[2671]: I0527 03:26:16.445164 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445193 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445201 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445208 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445214 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445221 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445231 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445241 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445251 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445259 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:26:16.445292 kubelet[2671]: E0527 03:26:16.445267 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:26:16.445292 kubelet[2671]: I0527 03:26:16.445277 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 03:26:17.924749 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718).
May 27 03:26:17.980228 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:26:17.982173 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:17.986638 systemd-logind[1505]: New session 15 of user core.
May 27 03:26:17.997271 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 03:26:18.119254 sshd[4154]: Connection closed by 10.0.0.1 port 45718
May 27 03:26:18.119602 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
May 27 03:26:18.124398 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:45718.service: Deactivated successfully.
May 27 03:26:18.126369 systemd[1]: session-15.scope: Deactivated successfully.
May 27 03:26:18.127150 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit.
May 27 03:26:18.128644 systemd-logind[1505]: Removed session 15.
May 27 03:26:23.140790 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:55136.service - OpenSSH per-connection server daemon (10.0.0.1:55136).
May 27 03:26:23.186682 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 55136 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:26:23.188097 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:23.193072 systemd-logind[1505]: New session 16 of user core.
May 27 03:26:23.210912 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 03:26:23.333202 sshd[4173]: Connection closed by 10.0.0.1 port 55136
May 27 03:26:23.333606 sshd-session[4171]: pam_unix(sshd:session): session closed for user core
May 27 03:26:23.338823 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:55136.service: Deactivated successfully.
May 27 03:26:23.341308 systemd[1]: session-16.scope: Deactivated successfully.
May 27 03:26:23.342413 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit.
May 27 03:26:23.344855 systemd-logind[1505]: Removed session 16.
May 27 03:26:23.878832 containerd[1580]: time="2025-05-27T03:26:23.878774817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}"
May 27 03:26:23.930358 containerd[1580]: time="2025-05-27T03:26:23.930276379Z" level=error msg="Failed to destroy network for sandbox \"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:23.932054 containerd[1580]: time="2025-05-27T03:26:23.932001336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:23.932319 kubelet[2671]: E0527 03:26:23.932281 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:23.932918 kubelet[2671]: E0527 03:26:23.932345 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:23.932918 kubelet[2671]: E0527 03:26:23.932371 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:23.932918 kubelet[2671]: E0527 03:26:23.932419 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a267bb648a91a6eddd18e5dcf9410bc6ce27f3920da0eb77323aa613480546d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa"
May 27 03:26:23.932537 systemd[1]: run-netns-cni\x2d331a7b5e\x2d3f65\x2da14c\x2d8b29\x2d727967b9af0a.mount: Deactivated successfully.
May 27 03:26:25.878949 containerd[1580]: time="2025-05-27T03:26:25.878629689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}"
May 27 03:26:25.942675 containerd[1580]: time="2025-05-27T03:26:25.941905301Z" level=error msg="Failed to destroy network for sandbox \"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:25.944369 containerd[1580]: time="2025-05-27T03:26:25.944316693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:25.944594 systemd[1]: run-netns-cni\x2d5eef4bc4\x2df78b\x2d5b2e\x2db760\x2d8b32e9176fdf.mount: Deactivated successfully.
May 27 03:26:25.944880 kubelet[2671]: E0527 03:26:25.944755 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:25.944880 kubelet[2671]: E0527 03:26:25.944845 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:25.945166 kubelet[2671]: E0527 03:26:25.944877 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:25.945166 kubelet[2671]: E0527 03:26:25.944941 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d70cb161b93f329300598da88ba97934212cd846f2a2e6383dba1bed762328fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e" May 27 03:26:26.461901 kubelet[2671]: I0527 03:26:26.461846 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:26:26.461901 kubelet[2671]: I0527 03:26:26.461891 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:26:26.463606 kubelet[2671]: I0527 03:26:26.463566 2671 image_gc_manager.go:431] "Attempting to delete unused images" May 27 03:26:26.475945 kubelet[2671]: I0527 03:26:26.475903 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:26:26.476057 kubelet[2671]: I0527 03:26:26.476021 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476065 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476077 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476086 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476095 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476104 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476118 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476130 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:26:26.476160 kubelet[2671]: E0527 03:26:26.476164 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk" May 27 03:26:26.476376 kubelet[2671]: E0527 03:26:26.476176 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 03:26:26.476376 kubelet[2671]: E0527 03:26:26.476188 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 03:26:26.476376 kubelet[2671]: I0527 03:26:26.476200 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 03:26:27.879922 kubelet[2671]: E0527 03:26:27.879842 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on 
/var/lib/containerd/tmpmounts/containerd-mount2055963045: write /var/lib/containerd/tmpmounts/containerd-mount2055963045/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-nl4v8" podUID="3ba286e9-822e-413a-a6bf-426b06794d9c" May 27 03:26:28.349797 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142). May 27 03:26:28.400612 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:28.402101 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:28.406205 systemd-logind[1505]: New session 17 of user core. May 27 03:26:28.417255 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:26:28.523249 sshd[4260]: Connection closed by 10.0.0.1 port 55142 May 27 03:26:28.523566 sshd-session[4258]: pam_unix(sshd:session): session closed for user core May 27 03:26:28.527399 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:55142.service: Deactivated successfully. May 27 03:26:28.529292 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:26:28.530023 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. May 27 03:26:28.531356 systemd-logind[1505]: Removed session 17. 
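The kubelet records above use the klog header layout: a severity letter (`I`/`W`/`E`/`F`), an `MMDD` date, a timestamp, the PID, the source `file:line`, then the structured message. As a sketch of how a journal like this could be post-processed, the following parses that header with a regex. The regex is an assumption derived from the sample record below, not kubelet's own grammar:

```python
import re

# klog header: <severity><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>
# Pattern is inferred from the journal excerpts above; not an official spec.
KLOG = re.compile(
    r'^(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +'
    r'(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] '
    r'(?P<msg>.*)$'
)

def parse_klog(line: str) -> dict:
    """Split one klog-formatted kubelet line into its header fields."""
    m = KLOG.match(line)
    if not m:
        raise ValueError("not a klog-formatted line")
    return m.groupdict()

# Sample taken verbatim from the eviction-manager records above.
sample = ('E0527 03:26:26.476065 2671 eviction_manager.go:609] '
          '"Eviction manager: cannot evict a critical pod" '
          'pod="kube-system/coredns-668d6bf9bc-qpxp6"')
rec = parse_klog(sample)
```

Grouping on `rec["src"]` makes the loop in this excerpt visible at a glance: every eviction attempt dies at `eviction_manager.go:609` because all ranked pods are critical, while the disk stays full.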
May 27 03:26:28.878631 containerd[1580]: time="2025-05-27T03:26:28.878578269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}" May 27 03:26:28.942192 containerd[1580]: time="2025-05-27T03:26:28.942113613Z" level=error msg="Failed to destroy network for sandbox \"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:28.943778 containerd[1580]: time="2025-05-27T03:26:28.943696858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:28.944041 kubelet[2671]: E0527 03:26:28.943994 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:28.944699 kubelet[2671]: E0527 03:26:28.944067 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:28.944699 kubelet[2671]: E0527 03:26:28.944090 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:28.944699 kubelet[2671]: E0527 03:26:28.944168 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:26:28.945186 systemd[1]: run-netns-cni\x2d1c7c4342\x2dd788\x2d442b\x2d58f2\x2d6c3831724410.mount: Deactivated successfully. 
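Every failed sandbox in these records dies on the same CNI error: the calico plugin cannot stat `/var/lib/calico/nodename`, a file that calico/node writes once it is running, and calico/node itself is stuck in ImagePullBackOff on the full disk, so the file never appears. A hedged sketch (field patterns are assumptions read off the records above, not a stable kubelet API) that pulls the sandbox ID and pod out of such an error line:

```python
import re

def extract_sandbox_failure(msg: str):
    """Extract (sandbox_id, pod) from a kubelet CNI failure record.

    Regexes are inferred from the journal excerpts above. The optional
    backslash allows for the escaped quotes journald prints in nested errors.
    """
    sandbox = re.search(r'sandbox \\?"([0-9a-f]{64})', msg)
    pod = re.search(r'pod="([^"]+)"', msg)
    return (sandbox.group(1) if sandbox else None,
            pod.group(1) if pod else None)

# Abridged from the csi-node-driver-lktnw record above.
sample = ('"Failed to create sandbox for pod" err="rpc error: code = Unknown '
          'desc = failed to setup network for sandbox '
          '"07c4da9e9893d2ad274b94d2b4535ed31c0539905b0a11edc03397cead71b95a": '
          'plugin type="calico" failed (add): stat /var/lib/calico/nodename: '
          'no such file or directory" '
          'pod="calico-system/csi-node-driver-lktnw"')
sandbox_id, pod = extract_sandbox_failure(sample)
```

Each retry allocates a fresh 64-hex-character sandbox ID, so deduplicating on the `pod` field rather than the sandbox ID is what reveals that only a handful of pods are actually failing.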
May 27 03:26:30.879421 containerd[1580]: time="2025-05-27T03:26:30.879357451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,}" May 27 03:26:30.978698 containerd[1580]: time="2025-05-27T03:26:30.978641370Z" level=error msg="Failed to destroy network for sandbox \"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:30.980241 containerd[1580]: time="2025-05-27T03:26:30.980199113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:30.980494 kubelet[2671]: E0527 03:26:30.980450 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:30.981995 kubelet[2671]: E0527 03:26:30.980521 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:30.981995 kubelet[2671]: E0527 03:26:30.980543 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:30.981995 kubelet[2671]: E0527 03:26:30.980592 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e897412adb47fb4617bd2e3b3ad7d01131d177e06b171edd83282295f7b1a10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qpxp6" podUID="0025fdff-1c55-4c53-8432-c3b22baafc85" May 27 03:26:30.980867 systemd[1]: run-netns-cni\x2dd2633178\x2dfbba\x2dba9d\x2d8bcb\x2dd2e70532c79e.mount: Deactivated successfully. May 27 03:26:33.548228 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:44436.service - OpenSSH per-connection server daemon (10.0.0.1:44436). 
May 27 03:26:33.598005 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 44436 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:33.599339 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:33.603508 systemd-logind[1505]: New session 18 of user core. May 27 03:26:33.613281 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:26:33.726577 sshd[4339]: Connection closed by 10.0.0.1 port 44436 May 27 03:26:33.726977 sshd-session[4337]: pam_unix(sshd:session): session closed for user core May 27 03:26:33.736348 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:44436.service: Deactivated successfully. May 27 03:26:33.739414 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:26:33.740198 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. May 27 03:26:33.743622 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:44450.service - OpenSSH per-connection server daemon (10.0.0.1:44450). May 27 03:26:33.744548 systemd-logind[1505]: Removed session 18. May 27 03:26:33.794893 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 44450 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:33.796269 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:33.800737 systemd-logind[1505]: New session 19 of user core. May 27 03:26:33.814270 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:26:34.034785 sshd[4354]: Connection closed by 10.0.0.1 port 44450 May 27 03:26:34.035207 sshd-session[4352]: pam_unix(sshd:session): session closed for user core May 27 03:26:34.045853 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:44450.service: Deactivated successfully. May 27 03:26:34.047916 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:26:34.048751 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. 
May 27 03:26:34.051822 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:44456.service - OpenSSH per-connection server daemon (10.0.0.1:44456). May 27 03:26:34.053013 systemd-logind[1505]: Removed session 19. May 27 03:26:34.103291 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 44456 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:34.104726 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:34.109104 systemd-logind[1505]: New session 20 of user core. May 27 03:26:34.118301 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:26:34.984955 sshd[4368]: Connection closed by 10.0.0.1 port 44456 May 27 03:26:34.985897 sshd-session[4366]: pam_unix(sshd:session): session closed for user core May 27 03:26:34.995543 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:44456.service: Deactivated successfully. May 27 03:26:34.998396 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:26:35.000030 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. May 27 03:26:35.005237 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:44460.service - OpenSSH per-connection server daemon (10.0.0.1:44460). May 27 03:26:35.006384 systemd-logind[1505]: Removed session 20. May 27 03:26:35.049608 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:35.051260 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:35.055713 systemd-logind[1505]: New session 21 of user core. May 27 03:26:35.065277 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 27 03:26:35.264777 sshd[4389]: Connection closed by 10.0.0.1 port 44460 May 27 03:26:35.265416 sshd-session[4387]: pam_unix(sshd:session): session closed for user core May 27 03:26:35.275404 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:44460.service: Deactivated successfully. May 27 03:26:35.277561 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:26:35.278489 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. May 27 03:26:35.281578 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:44462.service - OpenSSH per-connection server daemon (10.0.0.1:44462). May 27 03:26:35.282576 systemd-logind[1505]: Removed session 21. May 27 03:26:35.337789 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 44462 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:35.339851 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:35.344495 systemd-logind[1505]: New session 22 of user core. May 27 03:26:35.355287 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:26:35.463997 sshd[4402]: Connection closed by 10.0.0.1 port 44462 May 27 03:26:35.464383 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 27 03:26:35.469315 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:44462.service: Deactivated successfully. May 27 03:26:35.471266 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:26:35.472235 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. May 27 03:26:35.473663 systemd-logind[1505]: Removed session 22. 
May 27 03:26:36.494156 kubelet[2671]: I0527 03:26:36.494082 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:26:36.494156 kubelet[2671]: I0527 03:26:36.494172 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:26:36.495995 kubelet[2671]: I0527 03:26:36.495917 2671 image_gc_manager.go:431] "Attempting to delete unused images" May 27 03:26:36.505755 kubelet[2671]: I0527 03:26:36.505712 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:26:36.505832 kubelet[2671]: I0527 03:26:36.505782 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-79469b85c4-szmp2","kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/csi-node-driver-lktnw","calico-system/calico-node-nl4v8","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 27 03:26:36.505832 kubelet[2671]: E0527 03:26:36.505814 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:36.505832 kubelet[2671]: E0527 03:26:36.505822 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6" May 27 03:26:36.505832 kubelet[2671]: E0527 03:26:36.505829 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505836 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505852 2671 eviction_manager.go:609] "Eviction manager: cannot 
evict a critical pod" pod="calico-system/calico-node-nl4v8" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505862 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505871 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505880 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505889 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 27 03:26:36.505954 kubelet[2671]: E0527 03:26:36.505897 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 27 03:26:36.505954 kubelet[2671]: I0527 03:26:36.505906 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 03:26:36.879094 containerd[1580]: time="2025-05-27T03:26:36.879041753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}" May 27 03:26:36.879605 containerd[1580]: time="2025-05-27T03:26:36.879043055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}" May 27 03:26:36.936404 containerd[1580]: time="2025-05-27T03:26:36.936340060Z" level=error msg="Failed to destroy network for sandbox \"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 27 03:26:36.939018 containerd[1580]: time="2025-05-27T03:26:36.938957718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:36.939443 kubelet[2671]: E0527 03:26:36.939404 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:36.939541 kubelet[2671]: E0527 03:26:36.939472 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:36.939541 kubelet[2671]: E0527 03:26:36.939495 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" May 27 03:26:36.939619 kubelet[2671]: E0527 03:26:36.939547 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76f9b492a0f5877a9151ea2e55408854db628c18a9a2bab6ad6f038f81356c9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e" May 27 03:26:36.939800 systemd[1]: run-netns-cni\x2d12ac2464\x2d59ed\x2dfe93\x2d641c\x2da858aaea77aa.mount: Deactivated successfully. 
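From this point the journal is dominated by the same records repeating once per kubelet sync loop, one per pending pod. When triaging an excerpt like this, collapsing the `"Error syncing pod"` records by their `pod=` attribute separates the handful of genuinely failing pods from the volume of retries. A small sketch, with the sample lines abridged (the long `err=` fields dropped) from real records above:

```python
import re
from collections import Counter

def failure_counts(lines):
    """Count 'Error syncing pod' records per pod="..." attribute."""
    counts = Counter()
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        m = re.search(r'pod="([^"]+)"', line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Abridged from records in this excerpt: err= fields omitted for brevity.
sample = [
    'E0527 03:26:28.944168 2671 pod_workers.go:1301] "Error syncing pod, skipping" pod="calico-system/csi-node-driver-lktnw"',
    'E0527 03:26:30.980592 2671 pod_workers.go:1301] "Error syncing pod, skipping" pod="kube-system/coredns-668d6bf9bc-qpxp6"',
    'E0527 03:26:41.943734 2671 pod_workers.go:1301] "Error syncing pod, skipping" pod="calico-system/csi-node-driver-lktnw"',
]
counts = failure_counts(sample)
```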
May 27 03:26:36.941390 containerd[1580]: time="2025-05-27T03:26:36.940804323Z" level=error msg="Failed to destroy network for sandbox \"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:36.942715 containerd[1580]: time="2025-05-27T03:26:36.942610000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:36.942896 kubelet[2671]: E0527 03:26:36.942835 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:36.942962 kubelet[2671]: E0527 03:26:36.942912 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:36.942962 kubelet[2671]: E0527 03:26:36.942938 2671 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678" May 27 03:26:36.943041 kubelet[2671]: E0527 03:26:36.942992 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d208b0fadc4858bb6fd147c04318b0a6bfec8875df1f9759c7be5164d42eca0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa" May 27 03:26:36.943036 systemd[1]: run-netns-cni\x2df4f15460\x2dae6b\x2d4772\x2d0703\x2d1709898fc312.mount: Deactivated successfully. May 27 03:26:40.478182 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:44476.service - OpenSSH per-connection server daemon (10.0.0.1:44476). May 27 03:26:40.524737 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:26:40.526270 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:26:40.530788 systemd-logind[1505]: New session 23 of user core. May 27 03:26:40.544285 systemd[1]: Started session-23.scope - Session 23 of User core. 
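The `run-netns-cni\x2d…​.mount` unit names systemd deactivates in these records are escaped mount units for network-namespace paths under `/run/netns/`: systemd writes `/` as `-` and escapes other bytes (including literal dashes in the path) as `\xNN`, so `\x2d` decodes back to `-`. The authoritative tool is `systemd-escape --unescape`; the sketch below reimplements just that decoding rule:

```python
def systemd_unescape(name: str) -> str:
    """Undo systemd unit-name escaping: '-' encodes '/', other bytes
    appear as \\xNN. Sketch of the published rule only; for real use,
    `systemd-escape --unescape` is authoritative.
    """
    out = []
    i = 0
    while i < len(name):
        if name.startswith('\\x', i) and i + 4 <= len(name):
            # \xNN: one escaped byte, e.g. \x2d -> '-'
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == '-':
            out.append('/')  # a bare '-' stands for a path separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    return ''.join(out)

# One of the mount units deactivated above (".mount" suffix removed).
unit = r'run-netns-cni\x2d12ac2464\x2d59ed\x2dfe93\x2d641c\x2da858aaea77aa'
path = '/' + systemd_unescape(unit)
```

Decoding shows these are per-sandbox CNI network namespaces (`/run/netns/cni-<id>`) being torn down after each failed `RunPodSandbox` attempt.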
May 27 03:26:40.651574 sshd[4492]: Connection closed by 10.0.0.1 port 44476 May 27 03:26:40.651889 sshd-session[4490]: pam_unix(sshd:session): session closed for user core May 27 03:26:40.656007 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:44476.service: Deactivated successfully. May 27 03:26:40.658208 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:26:40.658988 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit. May 27 03:26:40.660665 systemd-logind[1505]: Removed session 23. May 27 03:26:41.879275 containerd[1580]: time="2025-05-27T03:26:41.879203168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}" May 27 03:26:41.881215 containerd[1580]: time="2025-05-27T03:26:41.881150958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:26:41.941925 containerd[1580]: time="2025-05-27T03:26:41.941858136Z" level=error msg="Failed to destroy network for sandbox \"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:41.944188 containerd[1580]: time="2025-05-27T03:26:41.943397177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:41.944070 systemd[1]: run-netns-cni\x2d4536b6e5\x2d2846\x2d4275\x2dc600\x2daa80c51981e8.mount: Deactivated 
successfully. May 27 03:26:41.944550 kubelet[2671]: E0527 03:26:41.943609 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:26:41.944550 kubelet[2671]: E0527 03:26:41.943672 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:41.944550 kubelet[2671]: E0527 03:26:41.943695 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw" May 27 03:26:41.944550 kubelet[2671]: E0527 03:26:41.943734 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"031f11d36c40db4aca04ef62bda19e145b37cbc1f365a165727afee21e8ad470\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f" May 27 03:26:45.297971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713149093.mount: Deactivated successfully. May 27 03:26:45.298833 containerd[1580]: time="2025-05-27T03:26:45.297972830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2713149093: write /var/lib/containerd/tmpmounts/containerd-mount2713149093/usr/bin/calico-node: no space left on device" May 27 03:26:45.298833 containerd[1580]: time="2025-05-27T03:26:45.298061338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:26:45.299212 kubelet[2671]: E0527 03:26:45.298945 2671 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2713149093: write /var/lib/containerd/tmpmounts/containerd-mount2713149093/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 03:26:45.299212 kubelet[2671]: E0527 03:26:45.299016 2671 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on 
/var/lib/containerd/tmpmounts/containerd-mount2713149093: write /var/lib/containerd/tmpmounts/containerd-mount2713149093/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.0" May 27 03:26:45.299523 kubelet[2671]: E0527 03:26:45.299292 2671 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFro
m:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4v
j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nl4v8_calico-system(3ba286e9-822e-413a-a6bf-426b06794d9c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.0\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2713149093: write /var/lib/containerd/tmpmounts/containerd-mount2713149093/usr/bin/calico-node: no space left on device" 
logger="UnhandledError"
May 27 03:26:45.300736 kubelet[2671]: E0527 03:26:45.300707 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.0\\\": failed to extract layer sha256:7a5cb5f4a2e3923ad79d2692d08de3a5238c395e141d8f7c21d1bfa5c6eb3e0f: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2713149093: write /var/lib/containerd/tmpmounts/containerd-mount2713149093/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-nl4v8" podUID="3ba286e9-822e-413a-a6bf-426b06794d9c"
May 27 03:26:45.668259 systemd[1]: Started sshd@23-10.0.0.141:22-10.0.0.1:57628.service - OpenSSH per-connection server daemon (10.0.0.1:57628).
May 27 03:26:45.723697 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 57628 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:26:45.725619 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:45.730228 systemd-logind[1505]: New session 24 of user core.
May 27 03:26:45.736297 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:26:45.847961 sshd[4544]: Connection closed by 10.0.0.1 port 57628
May 27 03:26:45.848321 sshd-session[4542]: pam_unix(sshd:session): session closed for user core
May 27 03:26:45.852083 systemd[1]: sshd@23-10.0.0.141:22-10.0.0.1:57628.service: Deactivated successfully.
May 27 03:26:45.854289 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:26:45.856706 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit.
May 27 03:26:45.857724 systemd-logind[1505]: Removed session 24.
May 27 03:26:45.879264 containerd[1580]: time="2025-05-27T03:26:45.879189237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,}"
May 27 03:26:45.933337 containerd[1580]: time="2025-05-27T03:26:45.933177031Z" level=error msg="Failed to destroy network for sandbox \"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:45.934877 containerd[1580]: time="2025-05-27T03:26:45.934767968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpxp6,Uid:0025fdff-1c55-4c53-8432-c3b22baafc85,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:45.935247 kubelet[2671]: E0527 03:26:45.935167 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:45.935323 kubelet[2671]: E0527 03:26:45.935261 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:45.935323 kubelet[2671]: E0527 03:26:45.935287 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:45.935419 kubelet[2671]: E0527 03:26:45.935349 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qpxp6_kube-system(0025fdff-1c55-4c53-8432-c3b22baafc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23c3d64e936e3a0db19a973e1e56be3830f30acf733c7d7d7ea2dbd40cfa509b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qpxp6" podUID="0025fdff-1c55-4c53-8432-c3b22baafc85"
May 27 03:26:45.936055 systemd[1]: run-netns-cni\x2db3cfde04\x2ddceb\x2d6c3b\x2d78fd\x2d7e85af4518cc.mount: Deactivated successfully.
May 27 03:26:46.521797 kubelet[2671]: I0527 03:26:46.521754 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:26:46.521797 kubelet[2671]: I0527 03:26:46.521795 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:26:46.523349 kubelet[2671]: I0527 03:26:46.523326 2671 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 03:26:46.546059 kubelet[2671]: I0527 03:26:46.546026 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:26:46.546173 kubelet[2671]: I0527 03:26:46.546114 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 03:26:46.546173 kubelet[2671]: E0527 03:26:46.546161 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:46.546173 kubelet[2671]: E0527 03:26:46.546171 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546178 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546185 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546192 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546202 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546210 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546218 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546228 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:26:46.546297 kubelet[2671]: E0527 03:26:46.546236 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:26:46.546297 kubelet[2671]: I0527 03:26:46.546245 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 03:26:47.879103 containerd[1580]: time="2025-05-27T03:26:47.879026115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,}"
May 27 03:26:47.879551 containerd[1580]: time="2025-05-27T03:26:47.879193785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,}"
May 27 03:26:47.932662 containerd[1580]: time="2025-05-27T03:26:47.932604723Z" level=error msg="Failed to destroy network for sandbox \"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/"
May 27 03:26:47.935502 systemd[1]: run-netns-cni\x2d89585d06\x2d5c1f\x2d1be4\x2dd28e\x2dd4294e8a8f68.mount: Deactivated successfully.
May 27 03:26:47.935832 containerd[1580]: time="2025-05-27T03:26:47.935763036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz678,Uid:ba2701a4-383c-4885-b697-c2657b09fefa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:47.936121 kubelet[2671]: E0527 03:26:47.936077 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:47.936651 kubelet[2671]: E0527 03:26:47.936165 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:47.936651 kubelet[2671]: E0527 03:26:47.936189 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:47.936651 kubelet[2671]: E0527 03:26:47.936242 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lz678_kube-system(ba2701a4-383c-4885-b697-c2657b09fefa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52dd5258f222d365753b949517aeac568a9a607258ab85f208626176042eb40b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lz678" podUID="ba2701a4-383c-4885-b697-c2657b09fefa"
May 27 03:26:47.937152 containerd[1580]: time="2025-05-27T03:26:47.937105319Z" level=error msg="Failed to destroy network for sandbox \"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:47.939264 systemd[1]: run-netns-cni\x2d5cbaa614\x2d6f6d\x2d5d22\x2d0c8a\x2d578349d3cc9c.mount: Deactivated successfully.
May 27 03:26:47.939463 containerd[1580]: time="2025-05-27T03:26:47.939379581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79469b85c4-szmp2,Uid:57d6fdf8-dafc-4012-a8a7-1301381db58e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:47.940030 kubelet[2671]: E0527 03:26:47.939983 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:47.940088 kubelet[2671]: E0527 03:26:47.940043 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:47.940088 kubelet[2671]: E0527 03:26:47.940068 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:47.940166 kubelet[2671]: E0527 03:26:47.940113 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79469b85c4-szmp2_calico-system(57d6fdf8-dafc-4012-a8a7-1301381db58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbf2a316d2f592e4ab9c0cc529985170fb45e6d234e1e7d1d697e1c6b4ff01c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2" podUID="57d6fdf8-dafc-4012-a8a7-1301381db58e"
May 27 03:26:50.861239 systemd[1]: Started sshd@24-10.0.0.141:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632).
May 27 03:26:50.915731 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:26:50.917235 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:50.921753 systemd-logind[1505]: New session 25 of user core.
May 27 03:26:50.928257 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:26:51.035623 sshd[4661]: Connection closed by 10.0.0.1 port 57632
May 27 03:26:51.035961 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
May 27 03:26:51.040681 systemd[1]: sshd@24-10.0.0.141:22-10.0.0.1:57632.service: Deactivated successfully.
May 27 03:26:51.043050 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:26:51.044367 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit.
May 27 03:26:51.045957 systemd-logind[1505]: Removed session 25.
May 27 03:26:53.878848 containerd[1580]: time="2025-05-27T03:26:53.878774474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,}"
May 27 03:26:53.991654 containerd[1580]: time="2025-05-27T03:26:53.991584802Z" level=error msg="Failed to destroy network for sandbox \"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:53.993050 containerd[1580]: time="2025-05-27T03:26:53.992984949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lktnw,Uid:b054e321-f80c-45e5-a80b-17a7bbc92d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:53.993417 kubelet[2671]: E0527 03:26:53.993296 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:26:53.994042 kubelet[2671]: E0527 03:26:53.993447 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:53.994042 kubelet[2671]: E0527 03:26:53.993471 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:53.994042 kubelet[2671]: E0527 03:26:53.993535 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lktnw_calico-system(b054e321-f80c-45e5-a80b-17a7bbc92d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dde6809d16f096616231f8b143cf83ea0229f2a266f576077c9f4339f0b10f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lktnw" podUID="b054e321-f80c-45e5-a80b-17a7bbc92d8f"
May 27 03:26:53.994765 systemd[1]: run-netns-cni\x2deeddcdaa\x2deee2\x2da290\x2d6c92\x2d592530397379.mount: Deactivated successfully.
May 27 03:26:56.048011 systemd[1]: Started sshd@25-10.0.0.141:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058).
May 27 03:26:56.085979 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:26:56.087757 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:26:56.092822 systemd-logind[1505]: New session 26 of user core.
May 27 03:26:56.104358 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 03:26:56.217079 sshd[4712]: Connection closed by 10.0.0.1 port 36058
May 27 03:26:56.217425 sshd-session[4710]: pam_unix(sshd:session): session closed for user core
May 27 03:26:56.222464 systemd[1]: sshd@25-10.0.0.141:22-10.0.0.1:36058.service: Deactivated successfully.
May 27 03:26:56.224654 systemd[1]: session-26.scope: Deactivated successfully.
May 27 03:26:56.225594 systemd-logind[1505]: Session 26 logged out. Waiting for processes to exit.
May 27 03:26:56.226955 systemd-logind[1505]: Removed session 26.
May 27 03:26:56.560515 kubelet[2671]: I0527 03:26:56.560468 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:26:56.560515 kubelet[2671]: I0527 03:26:56.560508 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:26:56.561648 kubelet[2671]: I0527 03:26:56.561620 2671 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 03:26:56.571342 kubelet[2671]: I0527 03:26:56.571288 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:26:56.571446 kubelet[2671]: I0527 03:26:56.571363 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction"
pods=["kube-system/coredns-668d6bf9bc-qpxp6","kube-system/coredns-668d6bf9bc-lz678","calico-system/calico-kube-controllers-79469b85c4-szmp2","calico-system/calico-node-nl4v8","calico-system/csi-node-driver-lktnw","calico-system/calico-typha-6d64b75d5-w8w2n","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-5pmvk","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571390 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qpxp6"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571399 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-lz678"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571406 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-79469b85c4-szmp2"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571413 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nl4v8"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571419 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-lktnw"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571429 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-6d64b75d5-w8w2n"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571441 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571449 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-5pmvk"
May 27 03:26:56.571446 kubelet[2671]: E0527 03:26:56.571458 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 27 03:26:56.571702 kubelet[2671]: E0527 03:26:56.571467 2671 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 27 03:26:56.571702 kubelet[2671]: I0527 03:26:56.571477 2671 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"