May 15 16:00:49.873394 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 16:00:49.873424 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 16:00:49.873435 kernel: BIOS-provided physical RAM map:
May 15 16:00:49.873442 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 16:00:49.873448 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 16:00:49.873455 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 16:00:49.873463 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 15 16:00:49.873475 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 15 16:00:49.873486 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 16:00:49.873493 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 16:00:49.873500 kernel: NX (Execute Disable) protection: active
May 15 16:00:49.873507 kernel: APIC: Static calls initialized
May 15 16:00:49.873514 kernel: SMBIOS 2.8 present.
May 15 16:00:49.873521 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 15 16:00:49.873532 kernel: DMI: Memory slots populated: 1/1
May 15 16:00:49.873540 kernel: Hypervisor detected: KVM
May 15 16:00:49.873551 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 16:00:49.873559 kernel: kvm-clock: using sched offset of 4501939217 cycles
May 15 16:00:49.873567 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 16:00:49.873575 kernel: tsc: Detected 2494.146 MHz processor
May 15 16:00:49.873583 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 16:00:49.873592 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 16:00:49.873599 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 15 16:00:49.873610 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 16:00:49.873619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 16:00:49.873627 kernel: ACPI: Early table checksum verification disabled
May 15 16:00:49.873634 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 15 16:00:49.873642 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873650 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873658 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873666 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 16:00:49.873674 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873684 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873692 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873700 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 16:00:49.873708 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 15 16:00:49.873716 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 15 16:00:49.873723 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 16:00:49.873731 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 15 16:00:49.873740 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 15 16:00:49.873754 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 15 16:00:49.873762 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 15 16:00:49.873771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 15 16:00:49.873779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 15 16:00:49.873787 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 15 16:00:49.873796 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 15 16:00:49.873807 kernel: Zone ranges:
May 15 16:00:49.873815 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 15 16:00:49.873824 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
May 15 16:00:49.873832 kernel:   Normal   empty
May 15 16:00:49.873840 kernel:   Device   empty
May 15 16:00:49.873848 kernel: Movable zone start for each node
May 15 16:00:49.873857 kernel: Early memory node ranges
May 15 16:00:49.873865 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
May 15 16:00:49.873873 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
May 15 16:00:49.873884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 15 16:00:49.873893 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 16:00:49.873901 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 16:00:49.873910 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 15 16:00:49.873918 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 16:00:49.873926 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 16:00:49.873937 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 16:00:49.873946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 16:00:49.873956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 16:00:49.873968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 16:00:49.873978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 16:00:49.875162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 16:00:49.875179 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 16:00:49.875188 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 16:00:49.875197 kernel: TSC deadline timer available
May 15 16:00:49.875206 kernel: CPU topo: Max. logical packages:   1
May 15 16:00:49.875214 kernel: CPU topo: Max. logical dies:       1
May 15 16:00:49.875223 kernel: CPU topo: Max. dies per package:   1
May 15 16:00:49.875232 kernel: CPU topo: Max. threads per core:   1
May 15 16:00:49.875248 kernel: CPU topo: Num. cores per package:  2
May 15 16:00:49.875257 kernel: CPU topo: Num. threads per package: 2
May 15 16:00:49.875265 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 16:00:49.875274 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 16:00:49.875282 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 15 16:00:49.875291 kernel: Booting paravirtualized kernel on KVM
May 15 16:00:49.875300 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 16:00:49.875309 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 16:00:49.875317 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 16:00:49.875329 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 16:00:49.875337 kernel: pcpu-alloc: [0] 0 1
May 15 16:00:49.875345 kernel: kvm-guest: PV spinlocks disabled, no host support
May 15 16:00:49.875356 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 16:00:49.875366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 16:00:49.875374 kernel: random: crng init done
May 15 16:00:49.875383 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 16:00:49.875392 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 15 16:00:49.875403 kernel: Fallback order for Node 0: 0
May 15 16:00:49.875412 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 524153
May 15 16:00:49.875420 kernel: Policy zone: DMA32
May 15 16:00:49.875429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 16:00:49.875437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 16:00:49.875455 kernel: Kernel/User page tables isolation: enabled
May 15 16:00:49.875468 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 16:00:49.875476 kernel: ftrace: allocated 157 pages with 5 groups
May 15 16:00:49.875485 kernel: Dynamic Preempt: voluntary
May 15 16:00:49.875497 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 16:00:49.875507 kernel: rcu: RCU event tracing is enabled.
May 15 16:00:49.875515 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 16:00:49.875524 kernel: Trampoline variant of Tasks RCU enabled.
May 15 16:00:49.875532 kernel: Rude variant of Tasks RCU enabled.
May 15 16:00:49.875541 kernel: Tracing variant of Tasks RCU enabled.
May 15 16:00:49.875549 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 16:00:49.875563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 16:00:49.875572 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 16:00:49.875591 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 16:00:49.875600 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 16:00:49.875609 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 16:00:49.875617 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 16:00:49.875626 kernel: Console: colour VGA+ 80x25
May 15 16:00:49.875634 kernel: printk: legacy console [tty0] enabled
May 15 16:00:49.875642 kernel: printk: legacy console [ttyS0] enabled
May 15 16:00:49.875651 kernel: ACPI: Core revision 20240827
May 15 16:00:49.875660 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 16:00:49.875681 kernel: APIC: Switch to symmetric I/O mode setup
May 15 16:00:49.875693 kernel: x2apic enabled
May 15 16:00:49.875711 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 16:00:49.875726 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 16:00:49.875742 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
May 15 16:00:49.875754 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146)
May 15 16:00:49.875767 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 15 16:00:49.875780 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 15 16:00:49.875794 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 16:00:49.875811 kernel: Spectre V2 : Mitigation: Retpolines
May 15 16:00:49.875824 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 16:00:49.875833 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 16:00:49.875842 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 16:00:49.875851 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 16:00:49.875861 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 16:00:49.875870 kernel: MDS: Mitigation: Clear CPU buffers
May 15 16:00:49.875879 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 15 16:00:49.875891 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 16:00:49.875900 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 16:00:49.875909 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 16:00:49.875917 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
May 15 16:00:49.875926 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 15 16:00:49.875935 kernel: Freeing SMP alternatives memory: 32K
May 15 16:00:49.875944 kernel: pid_max: default: 32768 minimum: 301
May 15 16:00:49.875953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 16:00:49.875962 kernel: landlock: Up and running.
May 15 16:00:49.875973 kernel: SELinux:  Initializing.
May 15 16:00:49.875982 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 16:00:49.876004 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 16:00:49.876013 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 15 16:00:49.876021 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 15 16:00:49.877031 kernel: signal: max sigframe size: 1776
May 15 16:00:49.877046 kernel: rcu: Hierarchical SRCU implementation.
May 15 16:00:49.877057 kernel: rcu: 	Max phase no-delay instances is 400.
May 15 16:00:49.877066 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 16:00:49.877075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 15 16:00:49.877090 kernel: smp: Bringing up secondary CPUs ...
May 15 16:00:49.877100 kernel: smpboot: x86: Booting SMP configuration:
May 15 16:00:49.877117 kernel: .... node  #0, CPUs:       #1
May 15 16:00:49.877126 kernel: smp: Brought up 1 node, 2 CPUs
May 15 16:00:49.877135 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS)
May 15 16:00:49.877145 kernel: Memory: 1966904K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 125144K reserved, 0K cma-reserved)
May 15 16:00:49.877155 kernel: devtmpfs: initialized
May 15 16:00:49.877164 kernel: x86/mm: Memory block size: 128MB
May 15 16:00:49.877173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 16:00:49.877185 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 16:00:49.877194 kernel: pinctrl core: initialized pinctrl subsystem
May 15 16:00:49.877203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 16:00:49.877212 kernel: audit: initializing netlink subsys (disabled)
May 15 16:00:49.877221 kernel: audit: type=2000 audit(1747324846.279:1): state=initialized audit_enabled=0 res=1
May 15 16:00:49.877230 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 16:00:49.877239 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 16:00:49.877248 kernel: cpuidle: using governor menu
May 15 16:00:49.877257 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 16:00:49.877269 kernel: dca service started, version 1.12.1
May 15 16:00:49.877278 kernel: PCI: Using configuration type 1 for base access
May 15 16:00:49.877287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 16:00:49.877296 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 16:00:49.877305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 16:00:49.877314 kernel: ACPI: Added _OSI(Module Device)
May 15 16:00:49.877323 kernel: ACPI: Added _OSI(Processor Device)
May 15 16:00:49.877332 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 16:00:49.877341 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 16:00:49.877352 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 16:00:49.877361 kernel: ACPI: Interpreter enabled
May 15 16:00:49.877370 kernel: ACPI: PM: (supports S0 S5)
May 15 16:00:49.877379 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 16:00:49.877388 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 16:00:49.877397 kernel: PCI: Using E820 reservations for host bridge windows
May 15 16:00:49.877406 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 15 16:00:49.877415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 16:00:49.877620 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 15 16:00:49.877724 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 15 16:00:49.877815 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 15 16:00:49.877828 kernel: acpiphp: Slot [3] registered
May 15 16:00:49.877837 kernel: acpiphp: Slot [4] registered
May 15 16:00:49.877846 kernel: acpiphp: Slot [5] registered
May 15 16:00:49.877854 kernel: acpiphp: Slot [6] registered
May 15 16:00:49.877863 kernel: acpiphp: Slot [7] registered
May 15 16:00:49.877876 kernel: acpiphp: Slot [8] registered
May 15 16:00:49.877885 kernel: acpiphp: Slot [9] registered
May 15 16:00:49.877894 kernel: acpiphp: Slot [10] registered
May 15 16:00:49.877902 kernel: acpiphp: Slot [11] registered
May 15 16:00:49.877911 kernel: acpiphp: Slot [12] registered
May 15 16:00:49.877919 kernel: acpiphp: Slot [13] registered
May 15 16:00:49.877928 kernel: acpiphp: Slot [14] registered
May 15 16:00:49.877937 kernel: acpiphp: Slot [15] registered
May 15 16:00:49.877946 kernel: acpiphp: Slot [16] registered
May 15 16:00:49.877958 kernel: acpiphp: Slot [17] registered
May 15 16:00:49.877967 kernel: acpiphp: Slot [18] registered
May 15 16:00:49.877975 kernel: acpiphp: Slot [19] registered
May 15 16:00:49.878570 kernel: acpiphp: Slot [20] registered
May 15 16:00:49.878585 kernel: acpiphp: Slot [21] registered
May 15 16:00:49.878594 kernel: acpiphp: Slot [22] registered
May 15 16:00:49.878604 kernel: acpiphp: Slot [23] registered
May 15 16:00:49.878612 kernel: acpiphp: Slot [24] registered
May 15 16:00:49.878621 kernel: acpiphp: Slot [25] registered
May 15 16:00:49.878630 kernel: acpiphp: Slot [26] registered
May 15 16:00:49.878645 kernel: acpiphp: Slot [27] registered
May 15 16:00:49.878654 kernel: acpiphp: Slot [28] registered
May 15 16:00:49.878663 kernel: acpiphp: Slot [29] registered
May 15 16:00:49.878672 kernel: acpiphp: Slot [30] registered
May 15 16:00:49.878681 kernel: acpiphp: Slot [31] registered
May 15 16:00:49.878690 kernel: PCI host bridge to bus 0000:00
May 15 16:00:49.878850 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
May 15 16:00:49.878949 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
May 15 16:00:49.879098 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 16:00:49.879204 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 15 16:00:49.879286 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 15 16:00:49.879366 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 16:00:49.879492 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 15 16:00:49.879602 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 15 16:00:49.879729 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 15 16:00:49.879859 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 15 16:00:49.879954 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 15 16:00:49.880062 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 15 16:00:49.880154 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 15 16:00:49.880248 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 15 16:00:49.880361 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 15 16:00:49.880462 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 15 16:00:49.880571 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 15 16:00:49.880666 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 15 16:00:49.880759 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 15 16:00:49.880961 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 15 16:00:49.884213 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 15 16:00:49.884340 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 15 16:00:49.884434 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 15 16:00:49.884528 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 15 16:00:49.884620 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 16:00:49.884734 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 16:00:49.884829 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 15 16:00:49.884961 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 15 16:00:49.885532 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 15 16:00:49.885663 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 16:00:49.885759 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 15 16:00:49.885851 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 15 16:00:49.885944 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 15 16:00:49.887139 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 16:00:49.887258 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 15 16:00:49.887366 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 15 16:00:49.887459 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 15 16:00:49.887565 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 16:00:49.887657 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 15 16:00:49.887748 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 15 16:00:49.887838 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 15 16:00:49.887942 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 16:00:49.888071 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 15 16:00:49.888162 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 15 16:00:49.888254 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 15 16:00:49.888368 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 15 16:00:49.888463 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 15 16:00:49.888554 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 15 16:00:49.888571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 16:00:49.888580 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 16:00:49.888589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 16:00:49.888599 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 16:00:49.888608 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 15 16:00:49.888617 kernel: iommu: Default domain type: Translated
May 15 16:00:49.888626 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 16:00:49.888635 kernel: PCI: Using ACPI for IRQ routing
May 15 16:00:49.888644 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 16:00:49.888656 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 16:00:49.888665 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 15 16:00:49.888757 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 15 16:00:49.888848 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 15 16:00:49.888958 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 16:00:49.888971 kernel: vgaarb: loaded
May 15 16:00:49.888980 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 16:00:49.891897 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 16:00:49.891910 kernel: clocksource: Switched to clocksource kvm-clock
May 15 16:00:49.891926 kernel: VFS: Disk quotas dquot_6.6.0
May 15 16:00:49.891936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 16:00:49.891946 kernel: pnp: PnP ACPI init
May 15 16:00:49.891955 kernel: pnp: PnP ACPI: found 4 devices
May 15 16:00:49.891965 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 16:00:49.891974 kernel: NET: Registered PF_INET protocol family
May 15 16:00:49.892005 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 16:00:49.892015 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 15 16:00:49.892028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 16:00:49.892038 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 15 16:00:49.892046 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 15 16:00:49.892056 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 15 16:00:49.892065 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 16:00:49.892074 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 16:00:49.892083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 16:00:49.892092 kernel: NET: Registered PF_XDP protocol family
May 15 16:00:49.892220 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
May 15 16:00:49.892310 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
May 15 16:00:49.892392 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 16:00:49.892472 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 15 16:00:49.892553 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 15 16:00:49.892654 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 15 16:00:49.892750 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 15 16:00:49.892763 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 15 16:00:49.892855 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 38079 usecs
May 15 16:00:49.892892 kernel: PCI: CLS 0 bytes, default 64
May 15 16:00:49.892901 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 15 16:00:49.892911 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
May 15 16:00:49.892920 kernel: Initialise system trusted keyrings
May 15 16:00:49.892929 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 15 16:00:49.892938 kernel: Key type asymmetric registered
May 15 16:00:49.892947 kernel: Asymmetric key parser 'x509' registered
May 15 16:00:49.892956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 16:00:49.892965 kernel: io scheduler mq-deadline registered
May 15 16:00:49.892977 kernel: io scheduler kyber registered
May 15 16:00:49.893000 kernel: io scheduler bfq registered
May 15 16:00:49.893009 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 16:00:49.893018 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 15 16:00:49.893027 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 15 16:00:49.893036 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 15 16:00:49.893045 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 16:00:49.893054 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 16:00:49.893063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 16:00:49.893075 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 16:00:49.893084 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 16:00:49.893203 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 16:00:49.893217 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 16:00:49.893299 kernel: rtc_cmos 00:03: registered as rtc0
May 15 16:00:49.893383 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T16:00:49 UTC (1747324849)
May 15 16:00:49.893465 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 15 16:00:49.893477 kernel: intel_pstate: CPU model not supported
May 15 16:00:49.893490 kernel: NET: Registered PF_INET6 protocol family
May 15 16:00:49.893498 kernel: Segment Routing with IPv6
May 15 16:00:49.893507 kernel: In-situ OAM (IOAM) with IPv6
May 15 16:00:49.893516 kernel: NET: Registered PF_PACKET protocol family
May 15 16:00:49.893525 kernel: Key type dns_resolver registered
May 15 16:00:49.893534 kernel: IPI shorthand broadcast: enabled
May 15 16:00:49.893543 kernel: sched_clock: Marking stable (3269005690, 95770471)->(3388007868, -23231707)
May 15 16:00:49.893552 kernel: registered taskstats version 1
May 15 16:00:49.893560 kernel: Loading compiled-in X.509 certificates
May 15 16:00:49.893572 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 16:00:49.893580 kernel: Demotion targets for Node 0: null
May 15 16:00:49.893589 kernel: Key type .fscrypt registered
May 15 16:00:49.893598 kernel: Key type fscrypt-provisioning registered
May 15 16:00:49.893624 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 16:00:49.893636 kernel: ima: Allocated hash algorithm: sha1
May 15 16:00:49.893645 kernel: ima: No architecture policies found
May 15 16:00:49.893654 kernel: clk: Disabling unused clocks
May 15 16:00:49.893666 kernel: Warning: unable to open an initial console.
May 15 16:00:49.893676 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 16:00:49.893685 kernel: Write protecting the kernel read-only data: 24576k
May 15 16:00:49.893695 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 16:00:49.893704 kernel: Run /init as init process
May 15 16:00:49.893714 kernel:   with arguments:
May 15 16:00:49.893723 kernel:     /init
May 15 16:00:49.893732 kernel:   with environment:
May 15 16:00:49.893741 kernel:     HOME=/
May 15 16:00:49.893754 kernel:     TERM=linux
May 15 16:00:49.893763 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 16:00:49.893774 systemd[1]: Successfully made /usr/ read-only.
May 15 16:00:49.893787 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 16:00:49.893798 systemd[1]: Detected virtualization kvm.
May 15 16:00:49.893807 systemd[1]: Detected architecture x86-64.
May 15 16:00:49.893822 systemd[1]: Running in initrd.
May 15 16:00:49.893834 systemd[1]: No hostname configured, using default hostname.
May 15 16:00:49.893847 systemd[1]: Hostname set to .
May 15 16:00:49.893857 systemd[1]: Initializing machine ID from VM UUID.
May 15 16:00:49.893866 systemd[1]: Queued start job for default target initrd.target.
May 15 16:00:49.893876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 16:00:49.893886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 16:00:49.893896 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 16:00:49.893906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 16:00:49.893916 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 16:00:49.893933 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 16:00:49.893944 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 16:00:49.893954 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 16:00:49.893966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 16:00:49.893976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 16:00:49.893999 systemd[1]: Reached target paths.target - Path Units.
May 15 16:00:49.894009 systemd[1]: Reached target slices.target - Slice Units.
May 15 16:00:49.894019 systemd[1]: Reached target swap.target - Swaps.
May 15 16:00:49.894029 systemd[1]: Reached target timers.target - Timer Units.
May 15 16:00:49.894039 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 16:00:49.894049 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 16:00:49.894059 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 16:00:49.894072 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 16:00:49.894082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 16:00:49.894091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 16:00:49.894101 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 16:00:49.894111 systemd[1]: Reached target sockets.target - Socket Units.
May 15 16:00:49.894121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 16:00:49.894131 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 16:00:49.894141 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 16:00:49.894153 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 16:00:49.894166 systemd[1]: Starting systemd-fsck-usr.service...
May 15 16:00:49.894176 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 16:00:49.894185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 16:00:49.894195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:00:49.894205 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 16:00:49.894218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 16:00:49.894228 systemd[1]: Finished systemd-fsck-usr.service.
May 15 16:00:49.894270 systemd-journald[210]: Collecting audit messages is disabled.
May 15 16:00:49.894297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 16:00:49.894308 systemd-journald[210]: Journal started
May 15 16:00:49.894329 systemd-journald[210]: Runtime Journal (/run/log/journal/08a45def8a7c45ab9e01136f86d9e881) is 4.9M, max 39.5M, 34.6M free.
May 15 16:00:49.874140 systemd-modules-load[212]: Inserted module 'overlay'
May 15 16:00:49.903059 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 16:00:49.909022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 16:00:49.909524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 16:00:49.935906 kernel: Bridge firewalling registered
May 15 16:00:49.913780 systemd-modules-load[212]: Inserted module 'br_netfilter'
May 15 16:00:49.936464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 16:00:49.940231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:00:49.940963 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 16:00:49.945195 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 16:00:49.948163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 16:00:49.948940 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 16:00:49.951214 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 16:00:49.963439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 16:00:49.973157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 16:00:49.980392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 16:00:49.984251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 16:00:49.990274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 16:00:49.992152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 16:00:50.022384 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 16:00:50.041518 systemd-resolved[246]: Positive Trust Anchors:
May 15 16:00:50.041531 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 16:00:50.041569 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 16:00:50.045348 systemd-resolved[246]: Defaulting to hostname 'linux'.
May 15 16:00:50.047749 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 16:00:50.048375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 16:00:50.142034 kernel: SCSI subsystem initialized
May 15 16:00:50.155044 kernel: Loading iSCSI transport class v2.0-870.
May 15 16:00:50.170035 kernel: iscsi: registered transport (tcp)
May 15 16:00:50.195241 kernel: iscsi: registered transport (qla4xxx)
May 15 16:00:50.195320 kernel: QLogic iSCSI HBA Driver
May 15 16:00:50.219229 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 16:00:50.237181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 16:00:50.240276 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 16:00:50.317601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 16:00:50.320826 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 16:00:50.388060 kernel: raid6: avx2x4 gen() 19214 MB/s
May 15 16:00:50.405050 kernel: raid6: avx2x2 gen() 18632 MB/s
May 15 16:00:50.422516 kernel: raid6: avx2x1 gen() 18679 MB/s
May 15 16:00:50.422617 kernel: raid6: using algorithm avx2x4 gen() 19214 MB/s
May 15 16:00:50.440403 kernel: raid6: .... xor() 6613 MB/s, rmw enabled
May 15 16:00:50.440507 kernel: raid6: using avx2x2 recovery algorithm
May 15 16:00:50.466066 kernel: xor: automatically using best checksumming function avx
May 15 16:00:50.717037 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 16:00:50.728652 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 16:00:50.731832 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 16:00:50.787239 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 15 16:00:50.797239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 16:00:50.800687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 16:00:50.834014 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
May 15 16:00:50.871228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 16:00:50.875939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 16:00:50.946884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 16:00:50.949853 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 16:00:51.040396 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 15 16:00:51.084149 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 15 16:00:51.084292 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 15 16:00:51.084445 kernel: cryptd: max_cpu_qlen set to 1000
May 15 16:00:51.084465 kernel: scsi host0: Virtio SCSI HBA
May 15 16:00:51.084647 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 16:00:51.084667 kernel: GPT:9289727 != 125829119
May 15 16:00:51.084684 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 16:00:51.084707 kernel: GPT:9289727 != 125829119
May 15 16:00:51.084723 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 16:00:51.084738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 16:00:51.089857 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 15 16:00:51.138891 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 16:00:51.138915 kernel: AES CTR mode by8 optimization enabled
May 15 16:00:51.138928 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 15 16:00:51.118804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 16:00:51.118929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:00:51.123982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:00:51.126770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:00:51.144828 kernel: ACPI: bus type USB registered
May 15 16:00:51.144880 kernel: usbcore: registered new interface driver usbfs
May 15 16:00:51.144899 kernel: usbcore: registered new interface driver hub
May 15 16:00:51.144915 kernel: libata version 3.00 loaded.
May 15 16:00:51.129677 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 16:00:51.147406 kernel: usbcore: registered new device driver usb
May 15 16:00:51.158074 kernel: ata_piix 0000:00:01.1: version 2.13
May 15 16:00:51.165868 kernel: scsi host1: ata_piix
May 15 16:00:51.166154 kernel: scsi host2: ata_piix
May 15 16:00:51.166309 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 15 16:00:51.166332 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 15 16:00:51.236190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 16:00:51.239541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:00:51.249683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 16:00:51.259359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 16:00:51.267850 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 16:00:51.268490 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 16:00:51.271280 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 16:00:51.307913 disk-uuid[605]: Primary Header is updated.
May 15 16:00:51.307913 disk-uuid[605]: Secondary Entries is updated.
May 15 16:00:51.307913 disk-uuid[605]: Secondary Header is updated.
May 15 16:00:51.320023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 16:00:51.347678 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 16:00:51.347771 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 15 16:00:51.358258 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 15 16:00:51.358463 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 15 16:00:51.358629 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 15 16:00:51.358805 kernel: hub 1-0:1.0: USB hub found
May 15 16:00:51.359008 kernel: hub 1-0:1.0: 2 ports detected
May 15 16:00:51.486027 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 16:00:51.518779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 16:00:51.519243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 16:00:51.519975 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 16:00:51.521751 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 16:00:51.544224 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 16:00:52.331114 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 16:00:52.331750 disk-uuid[606]: The operation has completed successfully.
May 15 16:00:52.391250 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 16:00:52.391463 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 16:00:52.448689 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 16:00:52.465327 sh[630]: Success
May 15 16:00:52.489596 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 16:00:52.489693 kernel: device-mapper: uevent: version 1.0.3
May 15 16:00:52.492006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 16:00:52.503047 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 15 16:00:52.567193 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 16:00:52.572119 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 16:00:52.587754 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 16:00:52.605053 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 16:00:52.605132 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (642)
May 15 16:00:52.606252 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 16:00:52.607124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 16:00:52.608128 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 16:00:52.616514 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 16:00:52.617770 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 16:00:52.618474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 16:00:52.619606 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 16:00:52.622506 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 16:00:52.664527 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (675)
May 15 16:00:52.664614 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 16:00:52.666988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 16:00:52.668093 kernel: BTRFS info (device vda6): using free-space-tree
May 15 16:00:52.681048 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 16:00:52.682808 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 16:00:52.686196 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 16:00:52.846763 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 16:00:52.850228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 16:00:52.890630 ignition[723]: Ignition 2.21.0
May 15 16:00:52.891280 ignition[723]: Stage: fetch-offline
May 15 16:00:52.891633 ignition[723]: no configs at "/usr/lib/ignition/base.d"
May 15 16:00:52.891642 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:52.891736 ignition[723]: parsed url from cmdline: ""
May 15 16:00:52.891739 ignition[723]: no config URL provided
May 15 16:00:52.891744 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
May 15 16:00:52.891751 ignition[723]: no config at "/usr/lib/ignition/user.ign"
May 15 16:00:52.891757 ignition[723]: failed to fetch config: resource requires networking
May 15 16:00:52.891939 ignition[723]: Ignition finished successfully
May 15 16:00:52.895457 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 16:00:52.910891 systemd-networkd[817]: lo: Link UP
May 15 16:00:52.910909 systemd-networkd[817]: lo: Gained carrier
May 15 16:00:52.914902 systemd-networkd[817]: Enumeration completed
May 15 16:00:52.915120 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 16:00:52.916047 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 16:00:52.916055 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 15 16:00:52.917084 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 16:00:52.917123 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 16:00:52.917564 systemd[1]: Reached target network.target - Network.
May 15 16:00:52.917954 systemd-networkd[817]: eth0: Link UP
May 15 16:00:52.917961 systemd-networkd[817]: eth0: Gained carrier
May 15 16:00:52.917978 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 16:00:52.921139 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 16:00:52.923454 systemd-networkd[817]: eth1: Link UP
May 15 16:00:52.923462 systemd-networkd[817]: eth1: Gained carrier
May 15 16:00:52.923485 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 16:00:52.940085 systemd-networkd[817]: eth0: DHCPv4 address 146.190.42.225/19, gateway 146.190.32.1 acquired from 169.254.169.253
May 15 16:00:52.944149 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
May 15 16:00:52.972267 ignition[821]: Ignition 2.21.0
May 15 16:00:52.972278 ignition[821]: Stage: fetch
May 15 16:00:52.972449 ignition[821]: no configs at "/usr/lib/ignition/base.d"
May 15 16:00:52.972458 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:52.972569 ignition[821]: parsed url from cmdline: ""
May 15 16:00:52.972572 ignition[821]: no config URL provided
May 15 16:00:52.972578 ignition[821]: reading system config file "/usr/lib/ignition/user.ign"
May 15 16:00:52.972586 ignition[821]: no config at "/usr/lib/ignition/user.ign"
May 15 16:00:52.972618 ignition[821]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 15 16:00:52.989231 ignition[821]: GET result: OK
May 15 16:00:52.990417 ignition[821]: parsing config with SHA512: 421c963c2b1f887c3d6bca9f0fb547098e9a7b1ab871607d9447416f368eb790607c2901ec50243fbd424fa88badc6533e5166a0a5ff69266ab5700fa3fc9e8a
May 15 16:00:52.998447 unknown[821]: fetched base config from "system"
May 15 16:00:52.999414 unknown[821]: fetched base config from "system"
May 15 16:00:52.999835 ignition[821]: fetch: fetch complete
May 15 16:00:52.999428 unknown[821]: fetched user config from "digitalocean"
May 15 16:00:52.999842 ignition[821]: fetch: fetch passed
May 15 16:00:52.999932 ignition[821]: Ignition finished successfully
May 15 16:00:53.004025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 16:00:53.009148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 16:00:53.042527 ignition[828]: Ignition 2.21.0
May 15 16:00:53.042542 ignition[828]: Stage: kargs
May 15 16:00:53.042693 ignition[828]: no configs at "/usr/lib/ignition/base.d"
May 15 16:00:53.042702 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:53.043608 ignition[828]: kargs: kargs passed
May 15 16:00:53.045394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 16:00:53.043671 ignition[828]: Ignition finished successfully
May 15 16:00:53.048397 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 16:00:53.085227 ignition[834]: Ignition 2.21.0
May 15 16:00:53.085247 ignition[834]: Stage: disks
May 15 16:00:53.085478 ignition[834]: no configs at "/usr/lib/ignition/base.d"
May 15 16:00:53.085493 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:53.087346 ignition[834]: disks: disks passed
May 15 16:00:53.087452 ignition[834]: Ignition finished successfully
May 15 16:00:53.089801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 16:00:53.091275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 16:00:53.091843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 16:00:53.093250 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 16:00:53.094457 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 16:00:53.095270 systemd[1]: Reached target basic.target - Basic System.
May 15 16:00:53.097931 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 16:00:53.134573 systemd-fsck[843]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 16:00:53.138772 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 16:00:53.142093 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 16:00:53.289030 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 16:00:53.290150 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 16:00:53.291659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 16:00:53.294715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 16:00:53.298269 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 16:00:53.303346 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 15 16:00:53.314286 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 15 16:00:53.314948 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 16:00:53.315127 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 16:00:53.328833 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 16:00:53.335182 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (851)
May 15 16:00:53.335224 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 16:00:53.339095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 16:00:53.339196 kernel: BTRFS info (device vda6): using free-space-tree
May 15 16:00:53.346314 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 16:00:53.380633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 16:00:53.447080 coreos-metadata[853]: May 15 16:00:53.446 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 16:00:53.461223 coreos-metadata[853]: May 15 16:00:53.461 INFO Fetch successful
May 15 16:00:53.463762 coreos-metadata[854]: May 15 16:00:53.463 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 16:00:53.470148 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 15 16:00:53.470282 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 15 16:00:53.476438 coreos-metadata[854]: May 15 16:00:53.475 INFO Fetch successful
May 15 16:00:53.477208 initrd-setup-root[882]: cut: /sysroot/etc/passwd: No such file or directory
May 15 16:00:53.483334 coreos-metadata[854]: May 15 16:00:53.483 INFO wrote hostname ci-4334.0.0-a-32b0bb88bb to /sysroot/etc/hostname
May 15 16:00:53.486942 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 16:00:53.489862 initrd-setup-root[890]: cut: /sysroot/etc/group: No such file or directory
May 15 16:00:53.498642 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory
May 15 16:00:53.507244 initrd-setup-root[904]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 16:00:53.655043 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 16:00:53.657494 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 16:00:53.658596 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 16:00:53.688076 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 16:00:53.688308 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 16:00:53.711495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 16:00:53.722588 ignition[975]: INFO : Ignition 2.21.0
May 15 16:00:53.722588 ignition[975]: INFO : Stage: mount
May 15 16:00:53.723626 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 16:00:53.723626 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:53.724411 ignition[975]: INFO : mount: mount passed
May 15 16:00:53.724411 ignition[975]: INFO : Ignition finished successfully
May 15 16:00:53.725765 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 16:00:53.727231 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 16:00:53.748069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 16:00:53.778371 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (987)
May 15 16:00:53.778438 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 16:00:53.780497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 16:00:53.780563 kernel: BTRFS info (device vda6): using free-space-tree
May 15 16:00:53.787076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 16:00:53.818664 ignition[1004]: INFO : Ignition 2.21.0
May 15 16:00:53.821661 ignition[1004]: INFO : Stage: files
May 15 16:00:53.821661 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 16:00:53.821661 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:53.821661 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping
May 15 16:00:53.823422 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 16:00:53.823422 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 16:00:53.826420 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 16:00:53.827068 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 16:00:53.827068 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 16:00:53.827026 unknown[1004]: wrote ssh authorized keys file for user: core
May 15 16:00:53.828734 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 16:00:53.829388 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 16:00:53.975140 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 16:00:54.064510 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 16:00:54.064510 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 16:00:54.070978 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 16:00:54.086300 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 15 16:00:54.642848 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 15 16:00:54.880283 systemd-networkd[817]: eth0: Gained IPv6LL
May 15 16:00:54.944458 systemd-networkd[817]: eth1: Gained IPv6LL
May 15 16:00:55.539247 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 16:00:55.539247 ignition[1004]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 15 16:00:55.542345 ignition[1004]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 16:00:55.544465 ignition[1004]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 16:00:55.544465 ignition[1004]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 15 16:00:55.544465 ignition[1004]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 15 16:00:55.544465 ignition[1004]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 15 16:00:55.547412 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 16:00:55.547412 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 16:00:55.547412 ignition[1004]: INFO : files: files passed
May 15 16:00:55.547412 ignition[1004]: INFO : Ignition finished successfully
May 15 16:00:55.547537 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 16:00:55.550819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 16:00:55.552059 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 16:00:55.572007 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 16:00:55.572149 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 16:00:55.582590 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 16:00:55.582590 initrd-setup-root-after-ignition[1034]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 16:00:55.585627 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 16:00:55.589242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 16:00:55.590111 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 16:00:55.591842 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 16:00:55.657118 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 16:00:55.657264 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 16:00:55.659065 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 16:00:55.659592 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 16:00:55.660074 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 16:00:55.662241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 16:00:55.692341 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 16:00:55.695197 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 16:00:55.722586 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 16:00:55.723296 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 16:00:55.723933 systemd[1]: Stopped target timers.target - Timer Units.
May 15 16:00:55.724718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 16:00:55.724965 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 16:00:55.725928 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 16:00:55.726540 systemd[1]: Stopped target basic.target - Basic System.
May 15 16:00:55.727264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 16:00:55.727909 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 16:00:55.728713 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 16:00:55.729477 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 15 16:00:55.730026 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 16:00:55.730725 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 16:00:55.731420 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 16:00:55.732177 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 16:00:55.732740 systemd[1]: Stopped target swap.target - Swaps.
May 15 16:00:55.733485 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 16:00:55.733679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 16:00:55.734519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 16:00:55.734957 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 16:00:55.735558 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 16:00:55.735659 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 16:00:55.736165 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 16:00:55.736292 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 16:00:55.737327 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 16:00:55.737502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 16:00:55.738279 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 16:00:55.738418 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 16:00:55.739033 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 15 16:00:55.739180 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 16:00:55.742107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 16:00:55.743521 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 16:00:55.746086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 16:00:55.746305 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 16:00:55.747102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 16:00:55.747259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 16:00:55.757294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 16:00:55.760129 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 16:00:55.778196 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 16:00:55.784309 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 16:00:55.785050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 16:00:55.790822 ignition[1058]: INFO : Ignition 2.21.0
May 15 16:00:55.790822 ignition[1058]: INFO : Stage: umount
May 15 16:00:55.792139 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 16:00:55.792139 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 16:00:55.795941 ignition[1058]: INFO : umount: umount passed
May 15 16:00:55.796707 ignition[1058]: INFO : Ignition finished successfully
May 15 16:00:55.799242 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 16:00:55.799431 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 16:00:55.800453 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 16:00:55.800534 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 16:00:55.801603 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 16:00:55.801680 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 16:00:55.802332 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 16:00:55.802402 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 16:00:55.803113 systemd[1]: Stopped target network.target - Network.
May 15 16:00:55.803720 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 16:00:55.803798 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 16:00:55.805030 systemd[1]: Stopped target paths.target - Path Units.
May 15 16:00:55.805622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 16:00:55.809085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 16:00:55.809709 systemd[1]: Stopped target slices.target - Slice Units.
May 15 16:00:55.810550 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 16:00:55.811334 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 16:00:55.811394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 16:00:55.811997 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 16:00:55.812054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 16:00:55.812670 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 16:00:55.812823 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 16:00:55.813500 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 16:00:55.813560 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 16:00:55.814177 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 16:00:55.814246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 16:00:55.815110 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 16:00:55.815919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 16:00:55.823461 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 16:00:55.823651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 16:00:55.827687 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 16:00:55.828063 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 16:00:55.828125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 16:00:55.831202 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 16:00:55.832300 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 16:00:55.832452 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 16:00:55.835242 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 16:00:55.835970 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 15 16:00:55.837310 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 16:00:55.837380 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 16:00:55.839448 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 16:00:55.839791 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 16:00:55.839849 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 16:00:55.840372 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 16:00:55.840437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 16:00:55.840956 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 16:00:55.841066 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 16:00:55.841826 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 16:00:55.845079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 16:00:55.857035 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 16:00:55.858214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 16:00:55.859144 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 16:00:55.859208 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 16:00:55.859731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 16:00:55.859777 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 16:00:55.860285 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 16:00:55.860351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 16:00:55.861718 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 16:00:55.861788 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 16:00:55.862961 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 16:00:55.863159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 16:00:55.865531 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 16:00:55.867325 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 16:00:55.867430 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 16:00:55.870524 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 16:00:55.870613 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 16:00:55.871342 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 16:00:55.871406 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 16:00:55.872185 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 16:00:55.872236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 16:00:55.873268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 16:00:55.873321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:00:55.885557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 16:00:55.885911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 16:00:55.891179 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 16:00:55.891338 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 16:00:55.892849 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 16:00:55.894454 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 16:00:55.917388 systemd[1]: Switching root.
May 15 16:00:55.988568 systemd-journald[210]: Journal stopped
May 15 16:00:57.252651 systemd-journald[210]: Received SIGTERM from PID 1 (systemd).
May 15 16:00:57.252810 kernel: SELinux: policy capability network_peer_controls=1
May 15 16:00:57.252832 kernel: SELinux: policy capability open_perms=1
May 15 16:00:57.252849 kernel: SELinux: policy capability extended_socket_class=1
May 15 16:00:57.252861 kernel: SELinux: policy capability always_check_network=0
May 15 16:00:57.252873 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 16:00:57.252886 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 16:00:57.252902 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 16:00:57.252914 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 16:00:57.252927 kernel: SELinux: policy capability userspace_initial_context=0
May 15 16:00:57.252946 kernel: audit: type=1403 audit(1747324856.137:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 16:00:57.252959 systemd[1]: Successfully loaded SELinux policy in 47.229ms.
May 15 16:00:57.252978 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.512ms.
May 15 16:00:57.254979 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 16:00:57.255022 systemd[1]: Detected virtualization kvm.
May 15 16:00:57.255041 systemd[1]: Detected architecture x86-64.
May 15 16:00:57.255059 systemd[1]: Detected first boot.
May 15 16:00:57.255076 systemd[1]: Hostname set to .
May 15 16:00:57.255101 systemd[1]: Initializing machine ID from VM UUID.
May 15 16:00:57.255115 zram_generator::config[1103]: No configuration found.
May 15 16:00:57.255136 kernel: Guest personality initialized and is inactive
May 15 16:00:57.255150 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 16:00:57.255163 kernel: Initialized host personality
May 15 16:00:57.255174 kernel: NET: Registered PF_VSOCK protocol family
May 15 16:00:57.255186 systemd[1]: Populated /etc with preset unit settings.
May 15 16:00:57.255205 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 16:00:57.255230 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 16:00:57.255248 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 16:00:57.255264 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 16:00:57.255286 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 16:00:57.255305 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 16:00:57.255324 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 16:00:57.255341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 16:00:57.255354 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 16:00:57.255367 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 16:00:57.255387 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 16:00:57.255424 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 16:00:57.255440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 16:00:57.255457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 16:00:57.255471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 16:00:57.255491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 16:00:57.255515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 16:00:57.255531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 16:00:57.255545 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 16:00:57.255558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 16:00:57.255570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 16:00:57.255582 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 16:00:57.255595 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 16:00:57.255607 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 16:00:57.255620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 16:00:57.255636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 16:00:57.255654 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 16:00:57.255666 systemd[1]: Reached target slices.target - Slice Units.
May 15 16:00:57.255678 systemd[1]: Reached target swap.target - Swaps.
May 15 16:00:57.255691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 16:00:57.255703 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 16:00:57.255715 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 16:00:57.255736 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 16:00:57.255748 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 16:00:57.255765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 16:00:57.255777 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 16:00:57.255789 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 16:00:57.255802 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 16:00:57.255815 systemd[1]: Mounting media.mount - External Media Directory...
May 15 16:00:57.255833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:57.255848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 16:00:57.255866 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 16:00:57.255882 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 16:00:57.255898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 16:00:57.255912 systemd[1]: Reached target machines.target - Containers.
May 15 16:00:57.255924 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 16:00:57.255938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 16:00:57.255953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 16:00:57.255970 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 16:00:57.255999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 16:00:57.256014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 16:00:57.256031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 16:00:57.256049 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 16:00:57.256073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 16:00:57.256092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 16:00:57.256111 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 16:00:57.256129 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 16:00:57.256147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 16:00:57.256165 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 16:00:57.256185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 16:00:57.256209 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 16:00:57.256229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 16:00:57.256252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 16:00:57.256272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 16:00:57.256288 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 16:00:57.256304 kernel: loop: module loaded
May 15 16:00:57.256317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 16:00:57.256329 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 16:00:57.256342 systemd[1]: Stopped verity-setup.service.
May 15 16:00:57.256356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:57.256371 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 16:00:57.256385 kernel: fuse: init (API version 7.41)
May 15 16:00:57.256398 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 16:00:57.256410 systemd[1]: Mounted media.mount - External Media Directory.
May 15 16:00:57.256432 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 16:00:57.256444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 16:00:57.256458 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 16:00:57.256471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 16:00:57.256483 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 16:00:57.256499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 16:00:57.256512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 16:00:57.256524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 16:00:57.256537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 16:00:57.256550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 16:00:57.256562 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 16:00:57.256575 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 16:00:57.256588 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 16:00:57.256608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 16:00:57.256622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 16:00:57.256635 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 16:00:57.256648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 16:00:57.256660 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 16:00:57.256673 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 16:00:57.256689 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 16:00:57.256702 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 16:00:57.256812 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 16:00:57.256839 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 16:00:57.256862 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 16:00:57.256880 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 16:00:57.256899 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 16:00:57.256920 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 16:00:57.256941 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 16:00:57.256962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 16:00:57.256976 kernel: ACPI: bus type drm_connector registered
May 15 16:00:57.257038 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 16:00:57.257059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 16:00:57.257126 systemd-journald[1173]: Collecting audit messages is disabled.
May 15 16:00:57.257155 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 16:00:57.257170 systemd-journald[1173]: Journal started
May 15 16:00:57.257202 systemd-journald[1173]: Runtime Journal (/run/log/journal/08a45def8a7c45ab9e01136f86d9e881) is 4.9M, max 39.5M, 34.6M free.
May 15 16:00:56.838102 systemd[1]: Queued start job for default target multi-user.target.
May 15 16:00:56.862861 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 16:00:56.863382 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 16:00:57.271017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 16:00:57.276565 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 16:00:57.280418 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 16:00:57.283198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 16:00:57.285046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 16:00:57.286412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 16:00:57.309621 kernel: loop0: detected capacity change from 0 to 146240
May 15 16:00:57.342457 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 16:00:57.344348 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
May 15 16:00:57.344363 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
May 15 16:00:57.349101 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 16:00:57.355547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 16:00:57.375351 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 16:00:57.376286 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 16:00:57.378399 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 16:00:57.385300 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 16:00:57.388380 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 16:00:57.411079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 16:00:57.426362 kernel: loop1: detected capacity change from 0 to 210664
May 15 16:00:57.432017 systemd-journald[1173]: Time spent on flushing to /var/log/journal/08a45def8a7c45ab9e01136f86d9e881 is 32.707ms for 1019 entries.
May 15 16:00:57.432017 systemd-journald[1173]: System Journal (/var/log/journal/08a45def8a7c45ab9e01136f86d9e881) is 8M, max 195.6M, 187.6M free.
May 15 16:00:57.469591 systemd-journald[1173]: Received client request to flush runtime journal.
May 15 16:00:57.472946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 16:00:57.479634 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 16:00:57.499036 kernel: loop2: detected capacity change from 0 to 113872
May 15 16:00:57.525698 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 16:00:57.530058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 16:00:57.548029 kernel: loop3: detected capacity change from 0 to 8
May 15 16:00:57.582084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 16:00:57.585732 kernel: loop4: detected capacity change from 0 to 146240
May 15 16:00:57.625864 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 15 16:00:57.625888 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 15 16:00:57.631039 kernel: loop5: detected capacity change from 0 to 210664
May 15 16:00:57.644669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 16:00:57.654043 kernel: loop6: detected capacity change from 0 to 113872
May 15 16:00:57.674025 kernel: loop7: detected capacity change from 0 to 8
May 15 16:00:57.676603 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 15 16:00:57.677641 (sd-merge)[1253]: Merged extensions into '/usr'.
May 15 16:00:57.685335 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 16:00:57.685721 systemd[1]: Reloading...
May 15 16:00:57.905028 zram_generator::config[1282]: No configuration found.
May 15 16:00:58.072572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 16:00:58.189685 ldconfig[1207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 16:00:58.202536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 16:00:58.202901 systemd[1]: Reloading finished in 516 ms.
May 15 16:00:58.217909 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 16:00:58.222658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 16:00:58.235231 systemd[1]: Starting ensure-sysext.service...
May 15 16:00:58.238351 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 16:00:58.291718 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 15 16:00:58.292172 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
May 15 16:00:58.292193 systemd[1]: Reloading...
May 15 16:00:58.292465 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 15 16:00:58.293042 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 16:00:58.293415 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 16:00:58.294768 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 16:00:58.295224 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
May 15 16:00:58.295323 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
May 15 16:00:58.300805 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
May 15 16:00:58.300827 systemd-tmpfiles[1326]: Skipping /boot
May 15 16:00:58.346205 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
May 15 16:00:58.346223 systemd-tmpfiles[1326]: Skipping /boot
May 15 16:00:58.455063 zram_generator::config[1362]: No configuration found.
May 15 16:00:58.594964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 16:00:58.736682 systemd[1]: Reloading finished in 443 ms.
May 15 16:00:58.747797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 16:00:58.755339 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 16:00:58.757876 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 16:00:58.761315 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 16:00:58.768359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 16:00:58.778155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 16:00:58.810746 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 16:00:58.825580 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 16:00:58.863849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.864223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 16:00:58.868268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 16:00:58.875298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 16:00:58.884586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 16:00:58.885209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 16:00:58.885373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 16:00:58.885500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.890530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.890727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 16:00:58.890904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 16:00:58.891672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 16:00:58.891859 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.898213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.898582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 16:00:58.902881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 16:00:58.903751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 16:00:58.903929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 16:00:58.904152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 16:00:58.907081 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 16:00:58.909939 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 16:00:58.913430 systemd[1]: Finished ensure-sysext.service.
May 15 16:00:58.915576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 16:00:58.923249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 16:00:58.938139 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 16:00:58.944250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 16:00:58.949001 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 16:00:58.950421 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 16:00:58.952480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 16:00:58.952795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 16:00:58.969883 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 16:00:58.976216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 16:00:58.998765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 16:00:58.999121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 16:00:59.000105 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 16:00:59.000333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 16:00:59.002973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 16:00:59.004198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 16:00:59.012723 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 16:00:59.032855 augenrules[1445]: No rules
May 15 16:00:59.038844 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 16:00:59.039195 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 16:00:59.051695 systemd-udevd[1428]: Using default interface naming scheme 'v255'.
May 15 16:00:59.104116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 16:00:59.109305 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 16:00:59.111208 systemd-resolved[1398]: Positive Trust Anchors:
May 15 16:00:59.111223 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 16:00:59.111260 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 16:00:59.120001 systemd-resolved[1398]: Using system hostname 'ci-4334.0.0-a-32b0bb88bb'.
May 15 16:00:59.124175 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 16:00:59.124722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 16:00:59.130085 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 16:00:59.130735 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 16:00:59.131355 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 16:00:59.131886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 16:00:59.132429 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 15 16:00:59.132938 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 16:00:59.133475 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 16:00:59.133504 systemd[1]: Reached target paths.target - Path Units.
May 15 16:00:59.134060 systemd[1]: Reached target time-set.target - System Time Set.
May 15 16:00:59.135167 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 16:00:59.135762 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 16:00:59.136267 systemd[1]: Reached target timers.target - Timer Units.
May 15 16:00:59.138477 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 16:00:59.140956 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 16:00:59.147898 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 15 16:00:59.149355 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 15 16:00:59.150188 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 15 16:00:59.161954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 16:00:59.163864 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 15 16:00:59.166955 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 16:00:59.172895 systemd[1]: Reached target sockets.target - Socket Units.
May 15 16:00:59.173415 systemd[1]: Reached target basic.target - Basic System.
May 15 16:00:59.174122 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 16:00:59.174180 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 16:00:59.178319 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 16:00:59.180122 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 16:00:59.185297 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 16:00:59.189267 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 16:00:59.192383 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 16:00:59.195128 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 16:00:59.206396 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 15 16:00:59.216422 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 16:00:59.227296 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 16:00:59.236316 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 16:00:59.251872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 16:00:59.259327 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 16:00:59.260761 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 16:00:59.278286 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 16:00:59.282104 systemd[1]: Starting update-engine.service - Update Engine...
May 15 16:00:59.291809 jq[1482]: false
May 15 16:00:59.291199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 16:00:59.301074 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 16:00:59.301838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 16:00:59.309307 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing passwd entry cache
May 15 16:00:59.310042 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 16:00:59.311264 oslogin_cache_refresh[1485]: Refreshing passwd entry cache
May 15 16:00:59.321523 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting users, quitting
May 15 16:00:59.321523 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 16:00:59.319957 oslogin_cache_refresh[1485]: Failure getting users, quitting
May 15 16:00:59.319982 oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 16:00:59.324188 oslogin_cache_refresh[1485]: Refreshing group entry cache
May 15 16:00:59.326171 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing group entry cache
May 15 16:00:59.326952 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting groups, quitting
May 15 16:00:59.327041 oslogin_cache_refresh[1485]: Failure getting groups, quitting
May 15 16:00:59.327104 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 16:00:59.327151 oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 16:00:59.336603 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 15 16:00:59.345185 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 15 16:00:59.346244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 16:00:59.347702 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 16:00:59.360538 jq[1494]: true
May 15 16:00:59.370047 update_engine[1493]: I20250515 16:00:59.369042 1493 main.cc:92] Flatcar Update Engine starting
May 15 16:00:59.403040 tar[1496]: linux-amd64/helm
May 15 16:00:59.417595 coreos-metadata[1479]: May 15 16:00:59.416 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 16:00:59.420020 coreos-metadata[1479]: May 15 16:00:59.417 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 15 16:00:59.422091 jq[1509]: true
May 15 16:00:59.425690 systemd-networkd[1454]: lo: Link UP
May 15 16:00:59.425701 systemd-networkd[1454]: lo: Gained carrier
May 15 16:00:59.428589 systemd-networkd[1454]: Enumeration completed
May 15 16:00:59.428896 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 16:00:59.429943 systemd[1]: Reached target network.target - Network.
May 15 16:00:59.434252 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 16:00:59.438051 extend-filesystems[1484]: Found loop4
May 15 16:00:59.438051 extend-filesystems[1484]: Found loop5
May 15 16:00:59.438051 extend-filesystems[1484]: Found loop6
May 15 16:00:59.438051 extend-filesystems[1484]: Found loop7
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda1
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda2
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda3
May 15 16:00:59.438051 extend-filesystems[1484]: Found usr
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda4
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda6
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda7
May 15 16:00:59.438051 extend-filesystems[1484]: Found vda9
May 15 16:00:59.438051 extend-filesystems[1484]: Found vdb
May 15 16:00:59.437610 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 16:00:59.472076 update_engine[1493]: I20250515 16:00:59.464406 1493 update_check_scheduler.cc:74] Next update check in 2m15s
May 15 16:00:59.439889 dbus-daemon[1480]: [system] SELinux support is enabled
May 15 16:00:59.440249 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 16:00:59.441234 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 16:00:59.449504 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 16:00:59.449793 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 16:00:59.451526 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 16:00:59.451564 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 16:00:59.452070 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 16:00:59.452089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 16:00:59.453194 systemd[1]: motdgen.service: Deactivated successfully.
May 15 16:00:59.453423 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 16:00:59.462353 systemd[1]: Started update-engine.service - Update Engine.
May 15 16:00:59.470355 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 16:00:59.530658 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 16:00:59.560572 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 16:00:59.600623 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
May 15 16:00:59.602328 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 16:00:59.610305 systemd[1]: Starting sshkeys.service...
May 15 16:00:59.682362 systemd-logind[1490]: New seat seat0.
May 15 16:00:59.683595 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 16:00:59.709043 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 15 16:00:59.713704 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 15 16:00:59.971485 coreos-metadata[1544]: May 15 16:00:59.959 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 16:00:59.980047 coreos-metadata[1544]: May 15 16:00:59.976 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 15 16:01:00.139457 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 15 16:01:00.155587 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 15 16:01:00.158384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 16:01:00.193782 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 16:01:00.245562 kernel: ISO 9660 Extensions: RRIP_1991A
May 15 16:01:00.251398 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 15 16:01:00.254319 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 15 16:01:00.306106 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 16:01:00.312574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 16:01:00.341230 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 16:01:00.350681 systemd-networkd[1454]: eth0: Configuring with /run/systemd/network/10-a6:7f:23:e6:b1:18.network.
May 15 16:01:00.358306 systemd-networkd[1454]: eth0: Link UP
May 15 16:01:00.359437 systemd-networkd[1454]: eth0: Gained carrier
May 15 16:01:00.366615 containerd[1533]: time="2025-05-15T16:01:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 15 16:01:00.368034 containerd[1533]: time="2025-05-15T16:01:00.367201942Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 15 16:01:00.376146 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:00.384604 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 16:01:00.414935 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 16:01:00.419031 coreos-metadata[1479]: May 15 16:01:00.418 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 15 16:01:00.439053 containerd[1533]: time="2025-05-15T16:01:00.438181428Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.342µs"
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439245646Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439305080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439548730Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439575980Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439612939Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439723605Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 16:01:00.440084 containerd[1533]: time="2025-05-15T16:01:00.439743550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442253314Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442297177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442322418Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442336836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442520104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442816603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442865455Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442883917Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 15 16:01:00.443013 containerd[1533]: time="2025-05-15T16:01:00.442941315Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 15 16:01:00.445537 containerd[1533]: time="2025-05-15T16:01:00.445490900Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 15 16:01:00.445969 containerd[1533]: time="2025-05-15T16:01:00.445933396Z" level=info msg="metadata content store policy set" policy=shared
May 15 16:01:00.451780 containerd[1533]: time="2025-05-15T16:01:00.451715064Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 15 16:01:00.452061 containerd[1533]: time="2025-05-15T16:01:00.452025344Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 15 16:01:00.452172 containerd[1533]: time="2025-05-15T16:01:00.452154148Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 15 16:01:00.452331 containerd[1533]: time="2025-05-15T16:01:00.452311537Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 15 16:01:00.452437 containerd[1533]: time="2025-05-15T16:01:00.452418839Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 15 16:01:00.452701 containerd[1533]: time="2025-05-15T16:01:00.452671755Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 15 16:01:00.452811 containerd[1533]: time="2025-05-15T16:01:00.452790670Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 15 16:01:00.452900 containerd[1533]: time="2025-05-15T16:01:00.452882908Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 15 16:01:00.453014 containerd[1533]: time="2025-05-15T16:01:00.452958359Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 15 16:01:00.453014 containerd[1533]: time="2025-05-15T16:01:00.452979647Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453102510Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453126904Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453315293Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453354817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453380727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453399061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453415269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453429793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453446243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453461330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453478709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453502770Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453519121Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453607740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 15 16:01:00.454013 containerd[1533]: time="2025-05-15T16:01:00.453633314Z" level=info msg="Start snapshots syncer"
May 15 16:01:00.454600 containerd[1533]: time="2025-05-15T16:01:00.453686126Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 15 16:01:00.457949 containerd[1533]: time="2025-05-15T16:01:00.457018455Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 15 16:01:00.457949 containerd[1533]: time="2025-05-15T16:01:00.457536426Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461167811Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461471983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461530434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461563460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461587276Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461618530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461643645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461667390Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461720758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461743333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461775759Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461843872Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461879758Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 16:01:00.462203 containerd[1533]: time="2025-05-15T16:01:00.461896709Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 16:01:00.464894 containerd[1533]: time="2025-05-15T16:01:00.461919526Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 16:01:00.464894 containerd[1533]: time="2025-05-15T16:01:00.461939449Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 15 16:01:00.464894 containerd[1533]: time="2025-05-15T16:01:00.461957436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 15 16:01:00.467516 coreos-metadata[1479]: May 15 16:01:00.467 INFO Fetch successful
May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468069793Z" level=info
msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468169079Z" level=info msg="runtime interface created" May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468186001Z" level=info msg="created NRI interface" May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468211792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468241160Z" level=info msg="Connect containerd service" May 15 16:01:00.469050 containerd[1533]: time="2025-05-15T16:01:00.468332746Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 16:01:00.477839 containerd[1533]: time="2025-05-15T16:01:00.474631724Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 16:01:00.550515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 16:01:00.564068 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 16:01:00.604874 systemd-networkd[1454]: eth1: Configuring with /run/systemd/network/10-7e:38:96:1d:14:ef.network. May 15 16:01:00.606467 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. May 15 16:01:00.606776 systemd-networkd[1454]: eth1: Link UP May 15 16:01:00.607693 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. May 15 16:01:00.608506 systemd-networkd[1454]: eth1: Gained carrier May 15 16:01:00.614170 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. 
May 15 16:01:00.617052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 15 16:01:00.621112 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 15 16:01:00.622244 kernel: ACPI: button: Power Button [PWRF]
May 15 16:01:00.624190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 16:01:00.658109 kernel: mousedev: PS/2 mouse device common for all mice
May 15 16:01:00.662703 systemd[1]: issuegen.service: Deactivated successfully.
May 15 16:01:00.663596 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 16:01:00.674654 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 16:01:00.737027 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 15 16:01:00.738804 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 16:01:00.767251 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 16:01:00.774527 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 16:01:00.780811 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 16:01:00.782527 systemd[1]: Reached target getty.target - Login Prompts.
May 15 16:01:00.898744 containerd[1533]: time="2025-05-15T16:01:00.898612099Z" level=info msg="Start subscribing containerd event"
May 15 16:01:00.898971 containerd[1533]: time="2025-05-15T16:01:00.898926759Z" level=info msg="Start recovering state"
May 15 16:01:00.899425 containerd[1533]: time="2025-05-15T16:01:00.899292049Z" level=info msg="Start event monitor"
May 15 16:01:00.899575 containerd[1533]: time="2025-05-15T16:01:00.899555368Z" level=info msg="Start cni network conf syncer for default"
May 15 16:01:00.899708 containerd[1533]: time="2025-05-15T16:01:00.899687536Z" level=info msg="Start streaming server"
May 15 16:01:00.900059 containerd[1533]: time="2025-05-15T16:01:00.899975498Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 15 16:01:00.900164 containerd[1533]: time="2025-05-15T16:01:00.900148121Z" level=info msg="runtime interface starting up..."
May 15 16:01:00.900248 containerd[1533]: time="2025-05-15T16:01:00.900234773Z" level=info msg="starting plugins..."
May 15 16:01:00.900359 containerd[1533]: time="2025-05-15T16:01:00.900341731Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 15 16:01:00.900753 containerd[1533]: time="2025-05-15T16:01:00.900707162Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 16:01:00.900840 containerd[1533]: time="2025-05-15T16:01:00.900806996Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 16:01:00.901375 containerd[1533]: time="2025-05-15T16:01:00.901349724Z" level=info msg="containerd successfully booted in 0.536940s"
May 15 16:01:00.901448 systemd[1]: Started containerd.service - containerd container runtime.
May 15 16:01:00.977290 coreos-metadata[1544]: May 15 16:01:00.977 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 15 16:01:00.992910 coreos-metadata[1544]: May 15 16:01:00.992 INFO Fetch successful
May 15 16:01:00.995633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:01:01.003043 unknown[1544]: wrote ssh authorized keys file for user: core
May 15 16:01:01.065953 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 16:01:01.077263 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button)
May 15 16:01:01.099026 update-ssh-keys[1623]: Updated "/home/core/.ssh/authorized_keys"
May 15 16:01:01.103348 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 15 16:01:01.109298 systemd[1]: Finished sshkeys.service.
May 15 16:01:01.223594 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 15 16:01:01.223750 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 15 16:01:01.329735 kernel: Console: switching to colour dummy device 80x25
May 15 16:01:01.329856 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 15 16:01:01.329878 kernel: [drm] features: -context_init
May 15 16:01:01.329938 kernel: [drm] number of scanouts: 1
May 15 16:01:01.329965 kernel: [drm] number of cap sets: 0
May 15 16:01:01.302253 systemd-vconsole-setup[1622]: KD_FONT_OP_SET failed, fonts will not be copied to tty2: Function not implemented
May 15 16:01:01.302317 systemd-vconsole-setup[1622]: KD_FONT_OP_SET failed, fonts will not be copied to tty3: Function not implemented
May 15 16:01:01.302362 systemd-vconsole-setup[1622]: KD_FONT_OP_SET failed, fonts will not be copied to tty4: Function not implemented
May 15 16:01:01.302404 systemd-vconsole-setup[1622]: KD_FONT_OP_SET failed, fonts will not be copied to tty5: Function not implemented
May 15 16:01:01.302441 systemd-vconsole-setup[1622]: KD_FONT_OP_SET failed, fonts will not be copied to tty6: Function not implemented
May 15 16:01:01.304740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:01:01.344085 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 15 16:01:01.368616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 16:01:01.370035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:01:01.370511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:01:01.374114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 16:01:01.378382 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 16:01:01.430915 tar[1496]: linux-amd64/LICENSE
May 15 16:01:01.430915 tar[1496]: linux-amd64/README.md
May 15 16:01:01.460870 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 16:01:01.464066 kernel: EDAC MC: Ver: 3.0.0
May 15 16:01:01.485839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 16:01:02.112377 systemd-networkd[1454]: eth0: Gained IPv6LL
May 15 16:01:02.113825 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:02.116228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 16:01:02.118650 systemd[1]: Reached target network-online.target - Network is Online.
May 15 16:01:02.122836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 16:01:02.128479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 16:01:02.182294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 16:01:02.368258 systemd-networkd[1454]: eth1: Gained IPv6LL
May 15 16:01:02.369092 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:03.468922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 16:01:03.469904 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 16:01:03.471829 systemd[1]: Startup finished in 3.340s (kernel) + 6.514s (initrd) + 7.379s (userspace) = 17.233s.
May 15 16:01:03.479006 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 16:01:03.972124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 16:01:03.975408 systemd[1]: Started sshd@0-146.190.42.225:22-139.178.68.195:43808.service - OpenSSH per-connection server daemon (139.178.68.195:43808).
May 15 16:01:04.077079 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 43808 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:04.083511 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:04.097536 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 16:01:04.102385 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 16:01:04.120556 systemd-logind[1490]: New session 1 of user core.
May 15 16:01:04.146883 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 16:01:04.151479 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 16:01:04.168801 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 16:01:04.173676 systemd-logind[1490]: New session c1 of user core.
May 15 16:01:04.376962 systemd[1676]: Queued start job for default target default.target.
May 15 16:01:04.389741 systemd[1676]: Created slice app.slice - User Application Slice.
May 15 16:01:04.390276 systemd[1676]: Reached target paths.target - Paths.
May 15 16:01:04.390344 systemd[1676]: Reached target timers.target - Timers.
May 15 16:01:04.392085 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 16:01:04.421828 kubelet[1660]: E0515 16:01:04.421724 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 16:01:04.426264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 16:01:04.426447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 16:01:04.427334 systemd[1]: kubelet.service: Consumed 1.429s CPU time, 243.2M memory peak.
May 15 16:01:04.436167 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 16:01:04.436352 systemd[1676]: Reached target sockets.target - Sockets.
May 15 16:01:04.436448 systemd[1676]: Reached target basic.target - Basic System.
May 15 16:01:04.436513 systemd[1676]: Reached target default.target - Main User Target.
May 15 16:01:04.436562 systemd[1676]: Startup finished in 248ms.
May 15 16:01:04.436843 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 16:01:04.446331 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 16:01:04.520422 systemd[1]: Started sshd@1-146.190.42.225:22-139.178.68.195:43820.service - OpenSSH per-connection server daemon (139.178.68.195:43820).
May 15 16:01:04.581162 sshd[1689]: Accepted publickey for core from 139.178.68.195 port 43820 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:04.583086 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:04.593104 systemd-logind[1490]: New session 2 of user core.
May 15 16:01:04.602330 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 16:01:04.666920 sshd[1691]: Connection closed by 139.178.68.195 port 43820
May 15 16:01:04.666307 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
May 15 16:01:04.682631 systemd[1]: sshd@1-146.190.42.225:22-139.178.68.195:43820.service: Deactivated successfully.
May 15 16:01:04.685441 systemd[1]: session-2.scope: Deactivated successfully.
May 15 16:01:04.690854 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
May 15 16:01:04.695734 systemd[1]: Started sshd@2-146.190.42.225:22-139.178.68.195:43836.service - OpenSSH per-connection server daemon (139.178.68.195:43836).
May 15 16:01:04.699659 systemd-logind[1490]: Removed session 2.
May 15 16:01:04.767694 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 43836 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:04.770364 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:04.778648 systemd-logind[1490]: New session 3 of user core.
May 15 16:01:04.792869 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 16:01:04.852879 sshd[1699]: Connection closed by 139.178.68.195 port 43836
May 15 16:01:04.853630 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
May 15 16:01:04.872066 systemd[1]: sshd@2-146.190.42.225:22-139.178.68.195:43836.service: Deactivated successfully.
May 15 16:01:04.874675 systemd[1]: session-3.scope: Deactivated successfully.
May 15 16:01:04.875963 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
May 15 16:01:04.881029 systemd[1]: Started sshd@3-146.190.42.225:22-139.178.68.195:43842.service - OpenSSH per-connection server daemon (139.178.68.195:43842).
May 15 16:01:04.883027 systemd-logind[1490]: Removed session 3.
May 15 16:01:04.947077 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 43842 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:04.949432 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:04.964953 systemd-logind[1490]: New session 4 of user core.
May 15 16:01:04.970341 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 16:01:05.036829 sshd[1707]: Connection closed by 139.178.68.195 port 43842
May 15 16:01:05.037746 sshd-session[1705]: pam_unix(sshd:session): session closed for user core
May 15 16:01:05.059706 systemd[1]: sshd@3-146.190.42.225:22-139.178.68.195:43842.service: Deactivated successfully.
May 15 16:01:05.063757 systemd[1]: session-4.scope: Deactivated successfully.
May 15 16:01:05.065946 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit.
May 15 16:01:05.074918 systemd[1]: Started sshd@4-146.190.42.225:22-139.178.68.195:43850.service - OpenSSH per-connection server daemon (139.178.68.195:43850).
May 15 16:01:05.076211 systemd-logind[1490]: Removed session 4.
May 15 16:01:05.141932 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 43850 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:05.144375 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:05.152546 systemd-logind[1490]: New session 5 of user core.
May 15 16:01:05.161324 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 16:01:05.242173 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 16:01:05.242623 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 16:01:05.259904 sudo[1716]: pam_unix(sudo:session): session closed for user root
May 15 16:01:05.264902 sshd[1715]: Connection closed by 139.178.68.195 port 43850
May 15 16:01:05.266207 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
May 15 16:01:05.280736 systemd[1]: sshd@4-146.190.42.225:22-139.178.68.195:43850.service: Deactivated successfully.
May 15 16:01:05.283248 systemd[1]: session-5.scope: Deactivated successfully.
May 15 16:01:05.284490 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit.
May 15 16:01:05.289120 systemd[1]: Started sshd@5-146.190.42.225:22-139.178.68.195:43854.service - OpenSSH per-connection server daemon (139.178.68.195:43854).
May 15 16:01:05.291519 systemd-logind[1490]: Removed session 5.
May 15 16:01:05.375211 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 43854 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:05.377561 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:05.385157 systemd-logind[1490]: New session 6 of user core.
May 15 16:01:05.394457 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 16:01:05.462549 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 16:01:05.463264 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 16:01:05.471321 sudo[1726]: pam_unix(sudo:session): session closed for user root
May 15 16:01:05.481068 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 16:01:05.481512 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 16:01:05.498508 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 16:01:05.562415 augenrules[1748]: No rules
May 15 16:01:05.564760 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 16:01:05.565552 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 16:01:05.567893 sudo[1725]: pam_unix(sudo:session): session closed for user root
May 15 16:01:05.571317 sshd[1724]: Connection closed by 139.178.68.195 port 43854
May 15 16:01:05.571949 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
May 15 16:01:05.583734 systemd[1]: sshd@5-146.190.42.225:22-139.178.68.195:43854.service: Deactivated successfully.
May 15 16:01:05.586329 systemd[1]: session-6.scope: Deactivated successfully.
May 15 16:01:05.588092 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
May 15 16:01:05.592212 systemd[1]: Started sshd@6-146.190.42.225:22-139.178.68.195:43868.service - OpenSSH per-connection server daemon (139.178.68.195:43868).
May 15 16:01:05.594194 systemd-logind[1490]: Removed session 6.
May 15 16:01:05.661472 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 43868 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:01:05.664245 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:01:05.675436 systemd-logind[1490]: New session 7 of user core.
May 15 16:01:05.686389 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 16:01:05.748052 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 16:01:05.748821 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 16:01:06.347421 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 16:01:06.370080 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 16:01:06.800426 dockerd[1779]: time="2025-05-15T16:01:06.798217556Z" level=info msg="Starting up"
May 15 16:01:06.802925 dockerd[1779]: time="2025-05-15T16:01:06.802490007Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 15 16:01:06.901021 dockerd[1779]: time="2025-05-15T16:01:06.900792802Z" level=info msg="Loading containers: start."
May 15 16:01:06.912059 kernel: Initializing XFRM netlink socket
May 15 16:01:07.197837 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:07.199781 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:07.210126 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:07.265037 systemd-networkd[1454]: docker0: Link UP
May 15 16:01:07.265517 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
May 15 16:01:07.269374 dockerd[1779]: time="2025-05-15T16:01:07.269310053Z" level=info msg="Loading containers: done."
May 15 16:01:07.294716 dockerd[1779]: time="2025-05-15T16:01:07.294631117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 16:01:07.294933 dockerd[1779]: time="2025-05-15T16:01:07.294739077Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 15 16:01:07.294933 dockerd[1779]: time="2025-05-15T16:01:07.294857657Z" level=info msg="Initializing buildkit"
May 15 16:01:07.297545 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4274952372-merged.mount: Deactivated successfully.
May 15 16:01:07.330957 dockerd[1779]: time="2025-05-15T16:01:07.330877354Z" level=info msg="Completed buildkit initialization"
May 15 16:01:07.343911 dockerd[1779]: time="2025-05-15T16:01:07.343777244Z" level=info msg="Daemon has completed initialization"
May 15 16:01:07.344482 dockerd[1779]: time="2025-05-15T16:01:07.344190193Z" level=info msg="API listen on /run/docker.sock"
May 15 16:01:07.344858 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 16:01:08.506857 containerd[1533]: time="2025-05-15T16:01:08.506795740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 15 16:01:09.075060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507606297.mount: Deactivated successfully.
May 15 16:01:10.718518 containerd[1533]: time="2025-05-15T16:01:10.718446489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:10.719700 containerd[1533]: time="2025-05-15T16:01:10.719653018Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 15 16:01:10.720423 containerd[1533]: time="2025-05-15T16:01:10.719778395Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:10.723825 containerd[1533]: time="2025-05-15T16:01:10.723757425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:10.726220 containerd[1533]: time="2025-05-15T16:01:10.726144123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.219285885s"
May 15 16:01:10.726220 containerd[1533]: time="2025-05-15T16:01:10.726201290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 15 16:01:10.756912 containerd[1533]: time="2025-05-15T16:01:10.756857067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 15 16:01:12.677441 containerd[1533]: time="2025-05-15T16:01:12.675884570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:12.677441 containerd[1533]: time="2025-05-15T16:01:12.677008550Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 15 16:01:12.677441 containerd[1533]: time="2025-05-15T16:01:12.677371355Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:12.680656 containerd[1533]: time="2025-05-15T16:01:12.680602995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:12.682265 containerd[1533]: time="2025-05-15T16:01:12.682207801Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.925297711s"
May 15 16:01:12.682265 containerd[1533]: time="2025-05-15T16:01:12.682263570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 15 16:01:12.706292 containerd[1533]: time="2025-05-15T16:01:12.706241583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 15 16:01:14.162033 containerd[1533]: time="2025-05-15T16:01:14.161935464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:14.164442 containerd[1533]: time="2025-05-15T16:01:14.164365254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 15 16:01:14.164864 containerd[1533]: time="2025-05-15T16:01:14.164799217Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:14.167720 containerd[1533]: time="2025-05-15T16:01:14.167639776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:14.169661 containerd[1533]: time="2025-05-15T16:01:14.169402537Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.463107363s"
May 15 16:01:14.169661 containerd[1533]: time="2025-05-15T16:01:14.169456559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 15 16:01:14.197268 containerd[1533]: time="2025-05-15T16:01:14.197221445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 15 16:01:14.448312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 16:01:14.450822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 16:01:14.632618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 16:01:14.646702 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 16:01:14.758122 kubelet[2086]: E0515 16:01:14.756832 2086 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 16:01:14.763512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 16:01:14.763805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 16:01:14.765684 systemd[1]: kubelet.service: Consumed 236ms CPU time, 96.3M memory peak.
May 15 16:01:15.298167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37050410.mount: Deactivated successfully.
May 15 16:01:15.823769 containerd[1533]: time="2025-05-15T16:01:15.823154949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:15.823769 containerd[1533]: time="2025-05-15T16:01:15.823734670Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 15 16:01:15.824290 containerd[1533]: time="2025-05-15T16:01:15.824262452Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:15.826080 containerd[1533]: time="2025-05-15T16:01:15.826035235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:15.827113 containerd[1533]: time="2025-05-15T16:01:15.827065843Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.629602567s"
May 15 16:01:15.827290 containerd[1533]: time="2025-05-15T16:01:15.827268174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 15 16:01:15.855908 containerd[1533]: time="2025-05-15T16:01:15.855868815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 16:01:15.858418 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
May 15 16:01:16.311615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339585637.mount: Deactivated successfully.
May 15 16:01:17.162814 containerd[1533]: time="2025-05-15T16:01:17.162736317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.163755 containerd[1533]: time="2025-05-15T16:01:17.163714788Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 15 16:01:17.164792 containerd[1533]: time="2025-05-15T16:01:17.164746464Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.168535 containerd[1533]: time="2025-05-15T16:01:17.168316344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.170517 containerd[1533]: time="2025-05-15T16:01:17.170102410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.313995292s"
May 15 16:01:17.170517 containerd[1533]: time="2025-05-15T16:01:17.170157157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 15 16:01:17.198492 containerd[1533]: time="2025-05-15T16:01:17.198378180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 15 16:01:17.604637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809770382.mount: Deactivated successfully.
May 15 16:01:17.609479 containerd[1533]: time="2025-05-15T16:01:17.609397383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.611403 containerd[1533]: time="2025-05-15T16:01:17.611311431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 15 16:01:17.611992 containerd[1533]: time="2025-05-15T16:01:17.611942120Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.615534 containerd[1533]: time="2025-05-15T16:01:17.615462883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:17.617604 containerd[1533]: time="2025-05-15T16:01:17.617340685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 418.558202ms"
May 15 16:01:17.617604 containerd[1533]: time="2025-05-15T16:01:17.617390353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 15 16:01:17.642934 containerd[1533]: time="2025-05-15T16:01:17.642739698Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 15 16:01:18.100853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343885429.mount: Deactivated successfully.
May 15 16:01:18.944208 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
May 15 16:01:20.138709 containerd[1533]: time="2025-05-15T16:01:20.138628736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:20.140056 containerd[1533]: time="2025-05-15T16:01:20.139977313Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 15 16:01:20.140891 containerd[1533]: time="2025-05-15T16:01:20.140799057Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:20.145691 containerd[1533]: time="2025-05-15T16:01:20.145591980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:01:20.147933 containerd[1533]: time="2025-05-15T16:01:20.147040660Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.50425529s"
May 15 16:01:20.147933 containerd[1533]: time="2025-05-15T16:01:20.147104028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 15 16:01:24.314088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 16:01:24.314822 systemd[1]: kubelet.service: Consumed 236ms CPU time, 96.3M memory peak.
May 15 16:01:24.317826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 16:01:24.347134 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-7.scope)...
May 15 16:01:24.347152 systemd[1]: Reloading...
May 15 16:01:24.523258 zram_generator::config[2347]: No configuration found.
May 15 16:01:24.664426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 16:01:24.825506 systemd[1]: Reloading finished in 477 ms.
May 15 16:01:24.897847 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 16:01:24.898197 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 16:01:24.898584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 16:01:24.898644 systemd[1]: kubelet.service: Consumed 117ms CPU time, 83.5M memory peak.
May 15 16:01:24.900840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 16:01:25.088073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 16:01:25.109121 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 16:01:25.182863 kubelet[2398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 16:01:25.182863 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 16:01:25.182863 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 16:01:25.184545 kubelet[2398]: I0515 16:01:25.184348 2398 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 16:01:25.525764 kubelet[2398]: I0515 16:01:25.525710 2398 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 16:01:25.525764 kubelet[2398]: I0515 16:01:25.525755 2398 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 16:01:25.526152 kubelet[2398]: I0515 16:01:25.526112 2398 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 16:01:25.548489 kubelet[2398]: I0515 16:01:25.548011 2398 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 16:01:25.548682 kubelet[2398]: E0515 16:01:25.548648 2398 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.42.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.567922 kubelet[2398]: I0515 16:01:25.567885 2398 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 16:01:25.568459 kubelet[2398]: I0515 16:01:25.568404 2398 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 16:01:25.568745 kubelet[2398]: I0515 16:01:25.568555 2398 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-32b0bb88bb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 16:01:25.569697 kubelet[2398]: I0515 16:01:25.569493 2398 topology_manager.go:138] "Creating topology manager with none policy"
May 15 16:01:25.569697 kubelet[2398]: I0515 16:01:25.569524 2398 container_manager_linux.go:301] "Creating device plugin manager"
May 15 16:01:25.570539 kubelet[2398]: I0515 16:01:25.570513 2398 state_mem.go:36] "Initialized new in-memory state store"
May 15 16:01:25.571509 kubelet[2398]: I0515 16:01:25.571480 2398 kubelet.go:400] "Attempting to sync node with API server"
May 15 16:01:25.572413 kubelet[2398]: I0515 16:01:25.572084 2398 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 16:01:25.572413 kubelet[2398]: I0515 16:01:25.572131 2398 kubelet.go:312] "Adding apiserver pod source"
May 15 16:01:25.572413 kubelet[2398]: I0515 16:01:25.572152 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 16:01:25.572585 kubelet[2398]: W0515 16:01:25.572542 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.42.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-32b0bb88bb&limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.572642 kubelet[2398]: E0515 16:01:25.572614 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.42.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-32b0bb88bb&limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.581545 kubelet[2398]: W0515 16:01:25.581445 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.42.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.581545 kubelet[2398]: E0515 16:01:25.581526 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.42.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.582220 kubelet[2398]: I0515 16:01:25.582177 2398 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 15 16:01:25.583787 kubelet[2398]: I0515 16:01:25.583686 2398 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 16:01:25.583787 kubelet[2398]: W0515 16:01:25.583789 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 16:01:25.585218 kubelet[2398]: I0515 16:01:25.584587 2398 server.go:1264] "Started kubelet"
May 15 16:01:25.586335 kubelet[2398]: I0515 16:01:25.586281 2398 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 16:01:25.588296 kubelet[2398]: I0515 16:01:25.587783 2398 server.go:455] "Adding debug handlers to kubelet server"
May 15 16:01:25.590898 kubelet[2398]: I0515 16:01:25.590843 2398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 16:01:25.591291 kubelet[2398]: I0515 16:01:25.591272 2398 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 16:01:25.591973 kubelet[2398]: E0515 16:01:25.591730 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.42.225:6443/api/v1/namespaces/default/events\": dial tcp 146.190.42.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-32b0bb88bb.183fbeb9c877b50e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-32b0bb88bb,UID:ci-4334.0.0-a-32b0bb88bb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-32b0bb88bb,},FirstTimestamp:2025-05-15 16:01:25.584557326 +0000 UTC m=+0.469935018,LastTimestamp:2025-05-15 16:01:25.584557326 +0000 UTC m=+0.469935018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-32b0bb88bb,}"
May 15 16:01:25.594200 kubelet[2398]: I0515 16:01:25.592881 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 16:01:25.600394 kubelet[2398]: E0515 16:01:25.599064 2398 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-32b0bb88bb\" not found"
May 15 16:01:25.600394 kubelet[2398]: I0515 16:01:25.599162 2398 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 16:01:25.600394 kubelet[2398]: I0515 16:01:25.599318 2398 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 16:01:25.600394 kubelet[2398]: I0515 16:01:25.599415 2398 reconciler.go:26] "Reconciler: start to sync state"
May 15 16:01:25.601117 kubelet[2398]: W0515 16:01:25.601053 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.42.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.601276 kubelet[2398]: E0515 16:01:25.601261 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.42.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.603598 kubelet[2398]: E0515 16:01:25.603546 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.42.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-32b0bb88bb?timeout=10s\": dial tcp 146.190.42.225:6443: connect: connection refused" interval="200ms"
May 15 16:01:25.608753 kubelet[2398]: I0515 16:01:25.608583 2398 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 16:01:25.612008 kubelet[2398]: I0515 16:01:25.611604 2398 factory.go:221] Registration of the containerd container factory successfully
May 15 16:01:25.612008 kubelet[2398]: I0515 16:01:25.611629 2398 factory.go:221] Registration of the systemd container factory successfully
May 15 16:01:25.612874 kubelet[2398]: E0515 16:01:25.612848 2398 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 16:01:25.627299 kubelet[2398]: I0515 16:01:25.627104 2398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 16:01:25.629015 kubelet[2398]: I0515 16:01:25.628967 2398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 16:01:25.629144 kubelet[2398]: I0515 16:01:25.629136 2398 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 16:01:25.629250 kubelet[2398]: I0515 16:01:25.629240 2398 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 16:01:25.629354 kubelet[2398]: E0515 16:01:25.629336 2398 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 16:01:25.643006 kubelet[2398]: W0515 16:01:25.642910 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.42.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.643150 kubelet[2398]: E0515 16:01:25.643037 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.42.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused
May 15 16:01:25.651236 kubelet[2398]: I0515 16:01:25.651200 2398 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 16:01:25.651236 kubelet[2398]: I0515 16:01:25.651226 2398 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 16:01:25.651397 kubelet[2398]: I0515 16:01:25.651273 2398 state_mem.go:36] "Initialized new in-memory state store"
May 15 16:01:25.653222 kubelet[2398]: I0515 16:01:25.653177 2398 policy_none.go:49] "None policy: Start"
May 15 16:01:25.654175 kubelet[2398]: I0515 16:01:25.654147 2398 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 16:01:25.654284 kubelet[2398]: I0515 16:01:25.654187 2398 state_mem.go:35] "Initializing new in-memory state store"
May 15 16:01:25.663430 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 16:01:25.681665 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 16:01:25.686522 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 16:01:25.696687 kubelet[2398]: I0515 16:01:25.696630 2398 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 16:01:25.696901 kubelet[2398]: I0515 16:01:25.696855 2398 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 16:01:25.697010 kubelet[2398]: I0515 16:01:25.696997 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 16:01:25.700746 kubelet[2398]: E0515 16:01:25.700689 2398 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-32b0bb88bb\" not found"
May 15 16:01:25.701755 kubelet[2398]: I0515 16:01:25.701276 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.701755 kubelet[2398]: E0515 16:01:25.701649 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.42.225:6443/api/v1/nodes\": dial tcp 146.190.42.225:6443: connect: connection refused" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.729921 kubelet[2398]: I0515 16:01:25.729793 2398 topology_manager.go:215] "Topology Admit Handler" podUID="1e1f054a0579b9dedf626c1ffb620660" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.731755 kubelet[2398]: I0515 16:01:25.731280 2398 topology_manager.go:215] "Topology Admit Handler" podUID="e2759de2c692773a769331d8391ed912" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.732802 kubelet[2398]: I0515 16:01:25.732761 2398 topology_manager.go:215] "Topology Admit Handler" podUID="a1f1e45da289edae29bf19b62b2773d7" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.742619 systemd[1]: Created slice kubepods-burstable-pod1e1f054a0579b9dedf626c1ffb620660.slice - libcontainer container kubepods-burstable-pod1e1f054a0579b9dedf626c1ffb620660.slice.
May 15 16:01:25.759088 systemd[1]: Created slice kubepods-burstable-poda1f1e45da289edae29bf19b62b2773d7.slice - libcontainer container kubepods-burstable-poda1f1e45da289edae29bf19b62b2773d7.slice.
May 15 16:01:25.774358 systemd[1]: Created slice kubepods-burstable-pode2759de2c692773a769331d8391ed912.slice - libcontainer container kubepods-burstable-pode2759de2c692773a769331d8391ed912.slice.
May 15 16:01:25.805113 kubelet[2398]: E0515 16:01:25.804922 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.42.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-32b0bb88bb?timeout=10s\": dial tcp 146.190.42.225:6443: connect: connection refused" interval="400ms"
May 15 16:01:25.901574 kubelet[2398]: I0515 16:01:25.901325 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901574 kubelet[2398]: I0515 16:01:25.901376 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901574 kubelet[2398]: I0515 16:01:25.901398 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901574 kubelet[2398]: I0515 16:01:25.901417 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1f1e45da289edae29bf19b62b2773d7-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-32b0bb88bb\" (UID: \"a1f1e45da289edae29bf19b62b2773d7\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901574 kubelet[2398]: I0515 16:01:25.901434 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901827 kubelet[2398]: I0515 16:01:25.901452 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901827 kubelet[2398]: I0515 16:01:25.901466 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901827 kubelet[2398]: I0515 16:01:25.901481 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.901827 kubelet[2398]: I0515 16:01:25.901497 2398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.903094 kubelet[2398]: I0515 16:01:25.903067 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:25.903439 kubelet[2398]: E0515 16:01:25.903414 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.42.225:6443/api/v1/nodes\": dial tcp 146.190.42.225:6443: connect: connection refused" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:26.055858 kubelet[2398]: E0515 16:01:26.055706 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:01:26.057835 containerd[1533]: time="2025-05-15T16:01:26.057748996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-32b0bb88bb,Uid:1e1f054a0579b9dedf626c1ffb620660,Namespace:kube-system,Attempt:0,}"
May 15 16:01:26.063914 systemd-resolved[1398]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
May 15 16:01:26.071036 kubelet[2398]: E0515 16:01:26.070896 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:01:26.077326 containerd[1533]: time="2025-05-15T16:01:26.077268203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-32b0bb88bb,Uid:a1f1e45da289edae29bf19b62b2773d7,Namespace:kube-system,Attempt:0,}"
May 15 16:01:26.077879 kubelet[2398]: E0515 16:01:26.077843 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:01:26.078362 containerd[1533]: time="2025-05-15T16:01:26.078330114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-32b0bb88bb,Uid:e2759de2c692773a769331d8391ed912,Namespace:kube-system,Attempt:0,}"
May 15 16:01:26.205753 kubelet[2398]: E0515 16:01:26.205643 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.42.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-32b0bb88bb?timeout=10s\": dial tcp 146.190.42.225:6443: connect: connection refused" interval="800ms"
May 15 16:01:26.305349 kubelet[2398]: I0515 16:01:26.305272 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:26.305692 kubelet[2398]: E0515 16:01:26.305658 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.42.225:6443/api/v1/nodes\": dial tcp 146.190.42.225:6443: connect: connection refused" node="ci-4334.0.0-a-32b0bb88bb"
May 15 16:01:26.496188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239869602.mount: Deactivated successfully.
May 15 16:01:26.500363 containerd[1533]: time="2025-05-15T16:01:26.500290229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:01:26.501761 containerd[1533]: time="2025-05-15T16:01:26.501720553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 16:01:26.502530 containerd[1533]: time="2025-05-15T16:01:26.502488243Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:01:26.505190 containerd[1533]: time="2025-05-15T16:01:26.505126085Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:01:26.506230 containerd[1533]: time="2025-05-15T16:01:26.506185902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:01:26.507543 containerd[1533]: time="2025-05-15T16:01:26.507346938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 16:01:26.507543 containerd[1533]: time="2025-05-15T16:01:26.507491843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:01:26.507543 containerd[1533]: time="2025-05-15T16:01:26.507512669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 
16:01:26.507925 containerd[1533]: time="2025-05-15T16:01:26.507897270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 445.460615ms" May 15 16:01:26.511632 containerd[1533]: time="2025-05-15T16:01:26.511103981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 431.174836ms" May 15 16:01:26.515274 containerd[1533]: time="2025-05-15T16:01:26.515234486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.741518ms" May 15 16:01:26.635631 containerd[1533]: time="2025-05-15T16:01:26.635564314Z" level=info msg="connecting to shim 09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50" address="unix:///run/containerd/s/c91b2a3dcb886c0eb0cd6434c6daf3e81ba2b59a677305a699be7e8cf1c4d4a3" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:26.636035 containerd[1533]: time="2025-05-15T16:01:26.636003733Z" level=info msg="connecting to shim 13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a" address="unix:///run/containerd/s/38ee76e169a54845463955edc2b3a325809f59efb6a89e4a0cfe838ab13c34d8" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:26.640532 containerd[1533]: time="2025-05-15T16:01:26.640477698Z" level=info msg="connecting to shim 
31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46" address="unix:///run/containerd/s/8cbe409b69fb5894d283323c16c3ea8b8e417b7a1e17ba70c581be6b4efa4060" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:26.713376 kubelet[2398]: W0515 16:01:26.713312 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.42.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.713897 kubelet[2398]: E0515 16:01:26.713562 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.42.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.757237 systemd[1]: Started cri-containerd-09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50.scope - libcontainer container 09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50. May 15 16:01:26.759230 systemd[1]: Started cri-containerd-13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a.scope - libcontainer container 13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a. May 15 16:01:26.762241 systemd[1]: Started cri-containerd-31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46.scope - libcontainer container 31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46. 
May 15 16:01:26.787693 kubelet[2398]: W0515 16:01:26.787601 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.42.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.787693 kubelet[2398]: E0515 16:01:26.787669 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.42.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.856487 kubelet[2398]: W0515 16:01:26.856427 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.42.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-32b0bb88bb&limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.857066 kubelet[2398]: E0515 16:01:26.856663 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.42.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-32b0bb88bb&limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:26.866020 containerd[1533]: time="2025-05-15T16:01:26.865941631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-32b0bb88bb,Uid:1e1f054a0579b9dedf626c1ffb620660,Namespace:kube-system,Attempt:0,} returns sandbox id \"09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50\"" May 15 16:01:26.868091 kubelet[2398]: E0515 16:01:26.868042 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:26.871016 containerd[1533]: time="2025-05-15T16:01:26.870583671Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-32b0bb88bb,Uid:a1f1e45da289edae29bf19b62b2773d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46\"" May 15 16:01:26.872784 kubelet[2398]: E0515 16:01:26.872660 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:26.873611 containerd[1533]: time="2025-05-15T16:01:26.873554426Z" level=info msg="CreateContainer within sandbox \"09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 16:01:26.878769 containerd[1533]: time="2025-05-15T16:01:26.878678877Z" level=info msg="CreateContainer within sandbox \"31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 16:01:26.882530 containerd[1533]: time="2025-05-15T16:01:26.882490904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-32b0bb88bb,Uid:e2759de2c692773a769331d8391ed912,Namespace:kube-system,Attempt:0,} returns sandbox id \"13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a\"" May 15 16:01:26.884233 kubelet[2398]: E0515 16:01:26.884145 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:26.886014 containerd[1533]: time="2025-05-15T16:01:26.885944013Z" level=info msg="Container d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:26.887359 containerd[1533]: time="2025-05-15T16:01:26.887310067Z" level=info msg="Container 
cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:26.888510 containerd[1533]: time="2025-05-15T16:01:26.888475427Z" level=info msg="CreateContainer within sandbox \"13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 16:01:26.895032 containerd[1533]: time="2025-05-15T16:01:26.894964569Z" level=info msg="CreateContainer within sandbox \"09d105ea69e47a8cf2dc2b7592e5d603169fa9de47006d8871fc8b8b44af9d50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566\"" May 15 16:01:26.897060 containerd[1533]: time="2025-05-15T16:01:26.897024199Z" level=info msg="StartContainer for \"d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566\"" May 15 16:01:26.898162 containerd[1533]: time="2025-05-15T16:01:26.898103158Z" level=info msg="connecting to shim d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566" address="unix:///run/containerd/s/c91b2a3dcb886c0eb0cd6434c6daf3e81ba2b59a677305a699be7e8cf1c4d4a3" protocol=ttrpc version=3 May 15 16:01:26.901317 containerd[1533]: time="2025-05-15T16:01:26.901276425Z" level=info msg="CreateContainer within sandbox \"31220fe2766e8f4540a4e80f7cf60a1011d60bfabd8981945a3305b8fce60d46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912\"" May 15 16:01:26.902126 containerd[1533]: time="2025-05-15T16:01:26.901941655Z" level=info msg="StartContainer for \"cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912\"" May 15 16:01:26.902819 containerd[1533]: time="2025-05-15T16:01:26.902791376Z" level=info msg="Container 474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:26.905146 containerd[1533]: 
time="2025-05-15T16:01:26.905106763Z" level=info msg="connecting to shim cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912" address="unix:///run/containerd/s/8cbe409b69fb5894d283323c16c3ea8b8e417b7a1e17ba70c581be6b4efa4060" protocol=ttrpc version=3 May 15 16:01:26.909683 containerd[1533]: time="2025-05-15T16:01:26.909622968Z" level=info msg="CreateContainer within sandbox \"13f1b5ed6458843d925e8a8f854ed84b95e13bc26c8d268a4855af9f5fe57e8a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02\"" May 15 16:01:26.911023 containerd[1533]: time="2025-05-15T16:01:26.910492123Z" level=info msg="StartContainer for \"474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02\"" May 15 16:01:26.911731 containerd[1533]: time="2025-05-15T16:01:26.911702228Z" level=info msg="connecting to shim 474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02" address="unix:///run/containerd/s/38ee76e169a54845463955edc2b3a325809f59efb6a89e4a0cfe838ab13c34d8" protocol=ttrpc version=3 May 15 16:01:26.933350 systemd[1]: Started cri-containerd-cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912.scope - libcontainer container cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912. May 15 16:01:26.947348 systemd[1]: Started cri-containerd-d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566.scope - libcontainer container d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566. May 15 16:01:26.961358 systemd[1]: Started cri-containerd-474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02.scope - libcontainer container 474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02. 
May 15 16:01:27.007210 kubelet[2398]: E0515 16:01:27.007095 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.42.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-32b0bb88bb?timeout=10s\": dial tcp 146.190.42.225:6443: connect: connection refused" interval="1.6s" May 15 16:01:27.052790 containerd[1533]: time="2025-05-15T16:01:27.051788492Z" level=info msg="StartContainer for \"cfb2f4bb5e5d32a641f44429970632fd7c3f9721e1319b717d1e5bc712436912\" returns successfully" May 15 16:01:27.068922 containerd[1533]: time="2025-05-15T16:01:27.068874249Z" level=info msg="StartContainer for \"474eb472c695f335228918ccc581bc05ed4cba6b7dec99d8291aa2aa99c19e02\" returns successfully" May 15 16:01:27.080223 containerd[1533]: time="2025-05-15T16:01:27.080177277Z" level=info msg="StartContainer for \"d988aec5820ce23541fc43405b272d77257608effd463387327f6ac386cc4566\" returns successfully" May 15 16:01:27.086744 kubelet[2398]: W0515 16:01:27.086571 2398 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.42.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:27.087123 kubelet[2398]: E0515 16:01:27.087076 2398 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.42.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.42.225:6443: connect: connection refused May 15 16:01:27.107478 kubelet[2398]: I0515 16:01:27.107444 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:27.110336 kubelet[2398]: E0515 16:01:27.110282 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.42.225:6443/api/v1/nodes\": dial tcp 146.190.42.225:6443: connect: 
connection refused" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:27.659541 kubelet[2398]: E0515 16:01:27.659494 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:27.665075 kubelet[2398]: E0515 16:01:27.664470 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:27.683270 kubelet[2398]: E0515 16:01:27.683223 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:28.682777 kubelet[2398]: E0515 16:01:28.682731 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:28.712023 kubelet[2398]: I0515 16:01:28.711923 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:29.608559 kubelet[2398]: E0515 16:01:29.608516 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-32b0bb88bb\" not found" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:29.666310 kubelet[2398]: I0515 16:01:29.666204 2398 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:29.680695 kubelet[2398]: E0515 16:01:29.680652 2398 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-32b0bb88bb\" not found" May 15 16:01:29.781658 kubelet[2398]: E0515 16:01:29.781599 2398 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-32b0bb88bb\" not found" May 15 16:01:29.882461 
kubelet[2398]: E0515 16:01:29.882294 2398 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-32b0bb88bb\" not found" May 15 16:01:30.041071 kubelet[2398]: E0515 16:01:30.041021 2398 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:30.041494 kubelet[2398]: E0515 16:01:30.041463 2398 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:30.577340 kubelet[2398]: I0515 16:01:30.577293 2398 apiserver.go:52] "Watching apiserver" May 15 16:01:30.600036 kubelet[2398]: I0515 16:01:30.599995 2398 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 16:01:32.051096 systemd[1]: Reload requested from client PID 2669 ('systemctl') (unit session-7.scope)... May 15 16:01:32.051119 systemd[1]: Reloading... May 15 16:01:32.162045 zram_generator::config[2713]: No configuration found. May 15 16:01:32.299269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 16:01:32.481494 systemd[1]: Reloading finished in 429 ms. May 15 16:01:32.512588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:01:32.513062 kubelet[2398]: I0515 16:01:32.512671 2398 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 16:01:32.522027 systemd[1]: kubelet.service: Deactivated successfully. May 15 16:01:32.522423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 16:01:32.522513 systemd[1]: kubelet.service: Consumed 925ms CPU time, 109.4M memory peak. May 15 16:01:32.525278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:01:32.685829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:01:32.694557 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 16:01:32.770192 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:01:32.770766 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 16:01:32.770766 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:01:32.772110 kubelet[2763]: I0515 16:01:32.771699 2763 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 16:01:32.779389 kubelet[2763]: I0515 16:01:32.779347 2763 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 16:01:32.779389 kubelet[2763]: I0515 16:01:32.779376 2763 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 16:01:32.779614 kubelet[2763]: I0515 16:01:32.779601 2763 server.go:927] "Client rotation is on, will bootstrap in background" May 15 16:01:32.781242 kubelet[2763]: I0515 16:01:32.781208 2763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 15 16:01:32.782976 kubelet[2763]: I0515 16:01:32.782883 2763 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 16:01:32.797190 kubelet[2763]: I0515 16:01:32.797048 2763 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 16:01:32.797454 kubelet[2763]: I0515 16:01:32.797320 2763 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 16:01:32.797640 kubelet[2763]: I0515 16:01:32.797369 2763 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-32b0bb88bb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 16:01:32.797746 kubelet[2763]: I0515 16:01:32.797659 2763 topology_manager.go:138] "Creating topology manager with none policy" May 15 16:01:32.797746 kubelet[2763]: I0515 16:01:32.797675 2763 container_manager_linux.go:301] "Creating device plugin manager" May 15 16:01:32.797746 kubelet[2763]: I0515 16:01:32.797731 2763 state_mem.go:36] "Initialized new in-memory state store" May 15 16:01:32.798075 kubelet[2763]: I0515 16:01:32.797885 2763 kubelet.go:400] "Attempting to sync node with API server" May 15 16:01:32.798075 kubelet[2763]: I0515 16:01:32.797909 2763 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 16:01:32.798075 kubelet[2763]: I0515 16:01:32.797938 2763 kubelet.go:312] "Adding apiserver pod source" May 15 16:01:32.798075 kubelet[2763]: I0515 16:01:32.797963 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 16:01:32.801375 kubelet[2763]: I0515 16:01:32.801347 2763 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 16:01:32.801542 kubelet[2763]: I0515 16:01:32.801527 2763 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 16:01:32.801934 kubelet[2763]: I0515 16:01:32.801919 2763 server.go:1264] "Started kubelet" May 15 16:01:32.809177 kubelet[2763]: I0515 16:01:32.808267 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 16:01:32.820960 kubelet[2763]: I0515 16:01:32.819128 2763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 16:01:32.824337 kubelet[2763]: I0515 16:01:32.823904 2763 server.go:455] "Adding debug handlers to kubelet server" May 15 16:01:32.830035 kubelet[2763]: I0515 16:01:32.824113 2763 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 16:01:32.836259 kubelet[2763]: I0515 16:01:32.830040 2763 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 16:01:32.837077 kubelet[2763]: I0515 16:01:32.830057 2763 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 16:01:32.839038 kubelet[2763]: I0515 16:01:32.836766 2763 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 16:01:32.841117 kubelet[2763]: I0515 16:01:32.838215 2763 reconciler.go:26] "Reconciler: start to sync state" May 15 16:01:32.841117 kubelet[2763]: I0515 16:01:32.839422 2763 factory.go:221] Registration of the systemd container factory successfully May 15 16:01:32.841410 kubelet[2763]: I0515 16:01:32.841294 2763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 16:01:32.845388 kubelet[2763]: E0515 16:01:32.845358 2763 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 16:01:32.845570 kubelet[2763]: I0515 16:01:32.845539 2763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 16:01:32.846172 kubelet[2763]: I0515 16:01:32.846152 2763 factory.go:221] Registration of the containerd container factory successfully May 15 16:01:32.856335 kubelet[2763]: I0515 16:01:32.856300 2763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 16:01:32.857431 kubelet[2763]: I0515 16:01:32.857401 2763 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 16:01:32.857540 kubelet[2763]: I0515 16:01:32.857459 2763 kubelet.go:2337] "Starting kubelet main sync loop" May 15 16:01:32.857577 kubelet[2763]: E0515 16:01:32.857542 2763 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908377 2763 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908403 2763 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908429 2763 state_mem.go:36] "Initialized new in-memory state store" May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908675 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908692 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 16:01:32.908827 kubelet[2763]: I0515 16:01:32.908717 2763 policy_none.go:49] "None policy: Start" May 15 16:01:32.909903 kubelet[2763]: I0515 16:01:32.909875 2763 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 16:01:32.909903 kubelet[2763]: I0515 16:01:32.909902 2763 state_mem.go:35] "Initializing new in-memory state store" May 15 16:01:32.910116 kubelet[2763]: I0515 16:01:32.910096 2763 state_mem.go:75] "Updated machine memory state" May 15 16:01:32.915688 kubelet[2763]: I0515 16:01:32.915577 2763 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 16:01:32.915840 kubelet[2763]: I0515 16:01:32.915767 2763 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 16:01:32.915885 kubelet[2763]: I0515 16:01:32.915869 2763 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 16:01:32.935608 kubelet[2763]: I0515 16:01:32.933302 2763 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.944950 kubelet[2763]: I0515 16:01:32.944913 2763 kubelet_node_status.go:112] "Node was previously registered" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.945151 kubelet[2763]: I0515 16:01:32.945093 2763 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.957932 kubelet[2763]: I0515 16:01:32.957835 2763 topology_manager.go:215] "Topology Admit Handler" podUID="1e1f054a0579b9dedf626c1ffb620660" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.958098 kubelet[2763]: I0515 16:01:32.958015 2763 topology_manager.go:215] "Topology Admit Handler" podUID="e2759de2c692773a769331d8391ed912" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.958186 kubelet[2763]: I0515 16:01:32.958107 2763 topology_manager.go:215] "Topology Admit Handler" podUID="a1f1e45da289edae29bf19b62b2773d7" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:32.969311 kubelet[2763]: W0515 16:01:32.969275 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:01:32.971228 kubelet[2763]: W0515 16:01:32.971118 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:01:32.973237 kubelet[2763]: W0515 16:01:32.973209 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:01:33.042433 kubelet[2763]: I0515 16:01:33.041951 2763 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042433 kubelet[2763]: I0515 16:01:33.042011 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042433 kubelet[2763]: I0515 16:01:33.042042 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042433 kubelet[2763]: I0515 16:01:33.042063 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042433 kubelet[2763]: I0515 16:01:33.042082 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1f1e45da289edae29bf19b62b2773d7-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-32b0bb88bb\" (UID: 
\"a1f1e45da289edae29bf19b62b2773d7\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042754 kubelet[2763]: I0515 16:01:33.042098 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042754 kubelet[2763]: I0515 16:01:33.042131 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e1f054a0579b9dedf626c1ffb620660-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" (UID: \"1e1f054a0579b9dedf626c1ffb620660\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042754 kubelet[2763]: I0515 16:01:33.042154 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.042754 kubelet[2763]: I0515 16:01:33.042185 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e2759de2c692773a769331d8391ed912-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-32b0bb88bb\" (UID: \"e2759de2c692773a769331d8391ed912\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.272701 kubelet[2763]: E0515 16:01:33.272375 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.274737 kubelet[2763]: E0515 16:01:33.272694 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.277255 kubelet[2763]: E0515 16:01:33.276926 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.800019 kubelet[2763]: I0515 16:01:33.799692 2763 apiserver.go:52] "Watching apiserver" May 15 16:01:33.839311 kubelet[2763]: I0515 16:01:33.839253 2763 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 16:01:33.894267 kubelet[2763]: E0515 16:01:33.893747 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.895926 kubelet[2763]: E0515 16:01:33.895884 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.904933 kubelet[2763]: W0515 16:01:33.904897 2763 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:01:33.905108 kubelet[2763]: E0515 16:01:33.904964 2763 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-32b0bb88bb\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:01:33.906642 kubelet[2763]: E0515 16:01:33.906482 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:33.949356 kubelet[2763]: I0515 16:01:33.949135 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" podStartSLOduration=1.949112703 podStartE2EDuration="1.949112703s" podCreationTimestamp="2025-05-15 16:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:01:33.930029709 +0000 UTC m=+1.228945723" watchObservedRunningTime="2025-05-15 16:01:33.949112703 +0000 UTC m=+1.248028721" May 15 16:01:33.975447 kubelet[2763]: I0515 16:01:33.975333 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" podStartSLOduration=1.975309852 podStartE2EDuration="1.975309852s" podCreationTimestamp="2025-05-15 16:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:01:33.952671279 +0000 UTC m=+1.251587289" watchObservedRunningTime="2025-05-15 16:01:33.975309852 +0000 UTC m=+1.274225868" May 15 16:01:33.997136 kubelet[2763]: I0515 16:01:33.997014 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" podStartSLOduration=1.996978541 podStartE2EDuration="1.996978541s" podCreationTimestamp="2025-05-15 16:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:01:33.976164088 +0000 UTC m=+1.275080100" watchObservedRunningTime="2025-05-15 16:01:33.996978541 +0000 UTC m=+1.295894551" May 15 16:01:34.895292 kubelet[2763]: E0515 16:01:34.895230 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:35.199808 kubelet[2763]: E0515 16:01:35.199766 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:35.898322 kubelet[2763]: E0515 16:01:35.896702 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:37.513940 systemd-timesyncd[1426]: Contacted time server 73.65.80.137:123 (2.flatcar.pool.ntp.org). May 15 16:01:37.514043 systemd-timesyncd[1426]: Initial clock synchronization to Thu 2025-05-15 16:01:37.584593 UTC. May 15 16:01:38.633963 sudo[1760]: pam_unix(sudo:session): session closed for user root May 15 16:01:38.637180 sshd[1759]: Connection closed by 139.178.68.195 port 43868 May 15 16:01:38.638271 sshd-session[1757]: pam_unix(sshd:session): session closed for user core May 15 16:01:38.643704 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. May 15 16:01:38.644385 systemd[1]: sshd@6-146.190.42.225:22-139.178.68.195:43868.service: Deactivated successfully. May 15 16:01:38.647682 systemd[1]: session-7.scope: Deactivated successfully. May 15 16:01:38.648228 systemd[1]: session-7.scope: Consumed 6.697s CPU time, 186.4M memory peak. May 15 16:01:38.651788 systemd-logind[1490]: Removed session 7. 
May 15 16:01:39.581360 kubelet[2763]: E0515 16:01:39.581285 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:39.904748 kubelet[2763]: E0515 16:01:39.903586 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:44.240187 update_engine[1493]: I20250515 16:01:44.240062 1493 update_attempter.cc:509] Updating boot flags... May 15 16:01:45.177407 kubelet[2763]: E0515 16:01:45.177359 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:45.208502 kubelet[2763]: E0515 16:01:45.208390 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:46.899955 kubelet[2763]: I0515 16:01:46.899583 2763 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 16:01:46.900985 containerd[1533]: time="2025-05-15T16:01:46.900777183Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 16:01:46.901649 kubelet[2763]: I0515 16:01:46.901208 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 16:01:47.639002 kubelet[2763]: I0515 16:01:47.638950 2763 topology_manager.go:215] "Topology Admit Handler" podUID="db94477b-5bdd-4708-bd2e-d986714f8555" podNamespace="kube-system" podName="kube-proxy-rnj6z" May 15 16:01:47.653298 systemd[1]: Created slice kubepods-besteffort-poddb94477b_5bdd_4708_bd2e_d986714f8555.slice - libcontainer container kubepods-besteffort-poddb94477b_5bdd_4708_bd2e_d986714f8555.slice. May 15 16:01:47.744560 kubelet[2763]: I0515 16:01:47.744456 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db94477b-5bdd-4708-bd2e-d986714f8555-xtables-lock\") pod \"kube-proxy-rnj6z\" (UID: \"db94477b-5bdd-4708-bd2e-d986714f8555\") " pod="kube-system/kube-proxy-rnj6z" May 15 16:01:47.744763 kubelet[2763]: I0515 16:01:47.744587 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db94477b-5bdd-4708-bd2e-d986714f8555-lib-modules\") pod \"kube-proxy-rnj6z\" (UID: \"db94477b-5bdd-4708-bd2e-d986714f8555\") " pod="kube-system/kube-proxy-rnj6z" May 15 16:01:47.744763 kubelet[2763]: I0515 16:01:47.744636 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db94477b-5bdd-4708-bd2e-d986714f8555-kube-proxy\") pod \"kube-proxy-rnj6z\" (UID: \"db94477b-5bdd-4708-bd2e-d986714f8555\") " pod="kube-system/kube-proxy-rnj6z" May 15 16:01:47.744763 kubelet[2763]: I0515 16:01:47.744679 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97k9j\" (UniqueName: \"kubernetes.io/projected/db94477b-5bdd-4708-bd2e-d986714f8555-kube-api-access-97k9j\") pod 
\"kube-proxy-rnj6z\" (UID: \"db94477b-5bdd-4708-bd2e-d986714f8555\") " pod="kube-system/kube-proxy-rnj6z" May 15 16:01:47.956177 kubelet[2763]: I0515 16:01:47.956077 2763 topology_manager.go:215] "Topology Admit Handler" podUID="10c5d861-69f5-41ae-bab2-9fe813c77a00" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-kmq9t" May 15 16:01:47.965370 kubelet[2763]: E0515 16:01:47.964152 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:47.966587 systemd[1]: Created slice kubepods-besteffort-pod10c5d861_69f5_41ae_bab2_9fe813c77a00.slice - libcontainer container kubepods-besteffort-pod10c5d861_69f5_41ae_bab2_9fe813c77a00.slice. May 15 16:01:47.968076 containerd[1533]: time="2025-05-15T16:01:47.967935088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rnj6z,Uid:db94477b-5bdd-4708-bd2e-d986714f8555,Namespace:kube-system,Attempt:0,}" May 15 16:01:47.999957 containerd[1533]: time="2025-05-15T16:01:47.999886372Z" level=info msg="connecting to shim 203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b" address="unix:///run/containerd/s/8aca45ada567a81ab78b4448d678dde41876cbf18fe4b61770542d7429862e69" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:48.047966 kubelet[2763]: I0515 16:01:48.047728 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10c5d861-69f5-41ae-bab2-9fe813c77a00-var-lib-calico\") pod \"tigera-operator-797db67f8-kmq9t\" (UID: \"10c5d861-69f5-41ae-bab2-9fe813c77a00\") " pod="tigera-operator/tigera-operator-797db67f8-kmq9t" May 15 16:01:48.047966 kubelet[2763]: I0515 16:01:48.047795 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrz6l\" (UniqueName: 
\"kubernetes.io/projected/10c5d861-69f5-41ae-bab2-9fe813c77a00-kube-api-access-mrz6l\") pod \"tigera-operator-797db67f8-kmq9t\" (UID: \"10c5d861-69f5-41ae-bab2-9fe813c77a00\") " pod="tigera-operator/tigera-operator-797db67f8-kmq9t" May 15 16:01:48.048276 systemd[1]: Started cri-containerd-203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b.scope - libcontainer container 203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b. May 15 16:01:48.081379 containerd[1533]: time="2025-05-15T16:01:48.081251604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rnj6z,Uid:db94477b-5bdd-4708-bd2e-d986714f8555,Namespace:kube-system,Attempt:0,} returns sandbox id \"203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b\"" May 15 16:01:48.083060 kubelet[2763]: E0515 16:01:48.083023 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:48.088753 containerd[1533]: time="2025-05-15T16:01:48.088617858Z" level=info msg="CreateContainer within sandbox \"203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 16:01:48.102748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394576554.mount: Deactivated successfully. 
May 15 16:01:48.104125 containerd[1533]: time="2025-05-15T16:01:48.104090125Z" level=info msg="Container 86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:48.111313 containerd[1533]: time="2025-05-15T16:01:48.111256929Z" level=info msg="CreateContainer within sandbox \"203a3612e85765ffb913de8cee983196f691f0e5ceadca60a9d5f4a086d0b28b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655\"" May 15 16:01:48.113038 containerd[1533]: time="2025-05-15T16:01:48.112315728Z" level=info msg="StartContainer for \"86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655\"" May 15 16:01:48.115442 containerd[1533]: time="2025-05-15T16:01:48.115384409Z" level=info msg="connecting to shim 86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655" address="unix:///run/containerd/s/8aca45ada567a81ab78b4448d678dde41876cbf18fe4b61770542d7429862e69" protocol=ttrpc version=3 May 15 16:01:48.137254 systemd[1]: Started cri-containerd-86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655.scope - libcontainer container 86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655. 
May 15 16:01:48.194933 containerd[1533]: time="2025-05-15T16:01:48.194818615Z" level=info msg="StartContainer for \"86713147e429f9700980d929a027cb960ba0e6effdf14aedb7f9f15af1c27655\" returns successfully" May 15 16:01:48.281860 containerd[1533]: time="2025-05-15T16:01:48.281252282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kmq9t,Uid:10c5d861-69f5-41ae-bab2-9fe813c77a00,Namespace:tigera-operator,Attempt:0,}" May 15 16:01:48.324615 containerd[1533]: time="2025-05-15T16:01:48.324435506Z" level=info msg="connecting to shim 7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2" address="unix:///run/containerd/s/2a626e2a668b16cb6c6659c46e04ba0ffad1d65493075573e6542b5d765c0d1f" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:48.365487 systemd[1]: Started cri-containerd-7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2.scope - libcontainer container 7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2. May 15 16:01:48.435954 containerd[1533]: time="2025-05-15T16:01:48.435905206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kmq9t,Uid:10c5d861-69f5-41ae-bab2-9fe813c77a00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\"" May 15 16:01:48.439544 containerd[1533]: time="2025-05-15T16:01:48.439450436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 16:01:48.931641 kubelet[2763]: E0515 16:01:48.931516 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:50.551381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614628169.mount: Deactivated successfully. 
May 15 16:01:51.187816 containerd[1533]: time="2025-05-15T16:01:51.187093050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:51.187816 containerd[1533]: time="2025-05-15T16:01:51.187754077Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 16:01:51.188460 containerd[1533]: time="2025-05-15T16:01:51.188055643Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:51.190275 containerd[1533]: time="2025-05-15T16:01:51.190230534Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:51.191039 containerd[1533]: time="2025-05-15T16:01:51.190764580Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.75085781s" May 15 16:01:51.191039 containerd[1533]: time="2025-05-15T16:01:51.190796675Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 16:01:51.198927 containerd[1533]: time="2025-05-15T16:01:51.198866623Z" level=info msg="CreateContainer within sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 16:01:51.208414 containerd[1533]: time="2025-05-15T16:01:51.208335273Z" level=info msg="Container 
38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:51.226062 containerd[1533]: time="2025-05-15T16:01:51.225980260Z" level=info msg="CreateContainer within sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\"" May 15 16:01:51.228093 containerd[1533]: time="2025-05-15T16:01:51.228052272Z" level=info msg="StartContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\"" May 15 16:01:51.229364 containerd[1533]: time="2025-05-15T16:01:51.229308740Z" level=info msg="connecting to shim 38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53" address="unix:///run/containerd/s/2a626e2a668b16cb6c6659c46e04ba0ffad1d65493075573e6542b5d765c0d1f" protocol=ttrpc version=3 May 15 16:01:51.265323 systemd[1]: Started cri-containerd-38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53.scope - libcontainer container 38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53. 
May 15 16:01:51.306405 containerd[1533]: time="2025-05-15T16:01:51.306363326Z" level=info msg="StartContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" returns successfully" May 15 16:01:51.955620 kubelet[2763]: I0515 16:01:51.955537 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rnj6z" podStartSLOduration=4.955517417 podStartE2EDuration="4.955517417s" podCreationTimestamp="2025-05-15 16:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:01:48.943383918 +0000 UTC m=+16.242299933" watchObservedRunningTime="2025-05-15 16:01:51.955517417 +0000 UTC m=+19.254433433" May 15 16:01:52.876012 kubelet[2763]: I0515 16:01:52.875842 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-kmq9t" podStartSLOduration=3.118252228 podStartE2EDuration="5.875822286s" podCreationTimestamp="2025-05-15 16:01:47 +0000 UTC" firstStartedPulling="2025-05-15 16:01:48.438097373 +0000 UTC m=+15.737013368" lastFinishedPulling="2025-05-15 16:01:51.19566743 +0000 UTC m=+18.494583426" observedRunningTime="2025-05-15 16:01:51.955860007 +0000 UTC m=+19.254776023" watchObservedRunningTime="2025-05-15 16:01:52.875822286 +0000 UTC m=+20.174738302" May 15 16:01:54.583390 kubelet[2763]: I0515 16:01:54.583328 2763 topology_manager.go:215] "Topology Admit Handler" podUID="767d34ab-3299-46dd-add9-09d52538ad17" podNamespace="calico-system" podName="calico-typha-8b9bd54c9-lhz4q" May 15 16:01:54.593750 systemd[1]: Created slice kubepods-besteffort-pod767d34ab_3299_46dd_add9_09d52538ad17.slice - libcontainer container kubepods-besteffort-pod767d34ab_3299_46dd_add9_09d52538ad17.slice. 
May 15 16:01:54.685039 kubelet[2763]: I0515 16:01:54.684660 2763 topology_manager.go:215] "Topology Admit Handler" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8" podNamespace="calico-system" podName="calico-node-68559" May 15 16:01:54.689017 kubelet[2763]: I0515 16:01:54.688468 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/767d34ab-3299-46dd-add9-09d52538ad17-typha-certs\") pod \"calico-typha-8b9bd54c9-lhz4q\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") " pod="calico-system/calico-typha-8b9bd54c9-lhz4q" May 15 16:01:54.689017 kubelet[2763]: I0515 16:01:54.688518 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767d34ab-3299-46dd-add9-09d52538ad17-tigera-ca-bundle\") pod \"calico-typha-8b9bd54c9-lhz4q\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") " pod="calico-system/calico-typha-8b9bd54c9-lhz4q" May 15 16:01:54.689017 kubelet[2763]: I0515 16:01:54.688552 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qdd\" (UniqueName: \"kubernetes.io/projected/767d34ab-3299-46dd-add9-09d52538ad17-kube-api-access-w7qdd\") pod \"calico-typha-8b9bd54c9-lhz4q\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") " pod="calico-system/calico-typha-8b9bd54c9-lhz4q" May 15 16:01:54.695217 systemd[1]: Created slice kubepods-besteffort-pode007eeab_9069_48bd_be2f_87c5ad02bcf8.slice - libcontainer container kubepods-besteffort-pode007eeab_9069_48bd_be2f_87c5ad02bcf8.slice. 
May 15 16:01:54.789010 kubelet[2763]: I0515 16:01:54.788772 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-flexvol-driver-host\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.789010 kubelet[2763]: I0515 16:01:54.788853 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-net-dir\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.789010 kubelet[2763]: I0515 16:01:54.788887 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-lib-modules\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.789010 kubelet[2763]: I0515 16:01:54.788912 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-xtables-lock\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.789010 kubelet[2763]: I0515 16:01:54.788934 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-policysync\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.789428 kubelet[2763]: I0515 16:01:54.788976 2763 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-log-dir\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790165 kubelet[2763]: I0515 16:01:54.789531 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-bin-dir\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790165 kubelet[2763]: I0515 16:01:54.789572 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjkl\" (UniqueName: \"kubernetes.io/projected/e007eeab-9069-48bd-be2f-87c5ad02bcf8-kube-api-access-gbjkl\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790165 kubelet[2763]: I0515 16:01:54.789591 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-lib-calico\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790165 kubelet[2763]: I0515 16:01:54.789700 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007eeab-9069-48bd-be2f-87c5ad02bcf8-tigera-ca-bundle\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790165 kubelet[2763]: I0515 16:01:54.789725 2763 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e007eeab-9069-48bd-be2f-87c5ad02bcf8-node-certs\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.790373 kubelet[2763]: I0515 16:01:54.789760 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-run-calico\") pod \"calico-node-68559\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") " pod="calico-system/calico-node-68559" May 15 16:01:54.844135 kubelet[2763]: I0515 16:01:54.843938 2763 topology_manager.go:215] "Topology Admit Handler" podUID="15ff8378-e357-4a15-80de-bc12411a603e" podNamespace="calico-system" podName="csi-node-driver-w2wp6" May 15 16:01:54.846021 kubelet[2763]: E0515 16:01:54.845480 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e" May 15 16:01:54.891022 kubelet[2763]: I0515 16:01:54.890884 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15ff8378-e357-4a15-80de-bc12411a603e-socket-dir\") pod \"csi-node-driver-w2wp6\" (UID: \"15ff8378-e357-4a15-80de-bc12411a603e\") " pod="calico-system/csi-node-driver-w2wp6" May 15 16:01:54.891022 kubelet[2763]: I0515 16:01:54.890971 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15ff8378-e357-4a15-80de-bc12411a603e-kubelet-dir\") pod \"csi-node-driver-w2wp6\" (UID: 
\"15ff8378-e357-4a15-80de-bc12411a603e\") " pod="calico-system/csi-node-driver-w2wp6" May 15 16:01:54.891022 kubelet[2763]: I0515 16:01:54.891016 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6ng\" (UniqueName: \"kubernetes.io/projected/15ff8378-e357-4a15-80de-bc12411a603e-kube-api-access-cf6ng\") pod \"csi-node-driver-w2wp6\" (UID: \"15ff8378-e357-4a15-80de-bc12411a603e\") " pod="calico-system/csi-node-driver-w2wp6" May 15 16:01:54.891267 kubelet[2763]: I0515 16:01:54.891050 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15ff8378-e357-4a15-80de-bc12411a603e-varrun\") pod \"csi-node-driver-w2wp6\" (UID: \"15ff8378-e357-4a15-80de-bc12411a603e\") " pod="calico-system/csi-node-driver-w2wp6" May 15 16:01:54.891267 kubelet[2763]: I0515 16:01:54.891107 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15ff8378-e357-4a15-80de-bc12411a603e-registration-dir\") pod \"csi-node-driver-w2wp6\" (UID: \"15ff8378-e357-4a15-80de-bc12411a603e\") " pod="calico-system/csi-node-driver-w2wp6" May 15 16:01:54.893820 kubelet[2763]: E0515 16:01:54.893666 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.893820 kubelet[2763]: W0515 16:01:54.893690 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.893820 kubelet[2763]: E0515 16:01:54.893730 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.895011 kubelet[2763]: E0515 16:01:54.894619 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.895011 kubelet[2763]: W0515 16:01:54.894643 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.895011 kubelet[2763]: E0515 16:01:54.894682 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.895011 kubelet[2763]: E0515 16:01:54.894921 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.895011 kubelet[2763]: W0515 16:01:54.894930 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.895227 kubelet[2763]: E0515 16:01:54.895153 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.896189 kubelet[2763]: E0515 16:01:54.896161 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.896189 kubelet[2763]: W0515 16:01:54.896180 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.896325 kubelet[2763]: E0515 16:01:54.896201 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.897013 kubelet[2763]: E0515 16:01:54.896739 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.897013 kubelet[2763]: W0515 16:01:54.896762 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.897013 kubelet[2763]: E0515 16:01:54.896885 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.897533 kubelet[2763]: E0515 16:01:54.897428 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.897533 kubelet[2763]: W0515 16:01:54.897441 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.898067 kubelet[2763]: E0515 16:01:54.897971 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.898285 kubelet[2763]: E0515 16:01:54.898266 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.898285 kubelet[2763]: W0515 16:01:54.898279 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.898526 kubelet[2763]: E0515 16:01:54.898509 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.901020 kubelet[2763]: E0515 16:01:54.900400 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:54.901969 containerd[1533]: time="2025-05-15T16:01:54.901932520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b9bd54c9-lhz4q,Uid:767d34ab-3299-46dd-add9-09d52538ad17,Namespace:calico-system,Attempt:0,}" May 15 16:01:54.906512 kubelet[2763]: E0515 16:01:54.906455 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.906512 kubelet[2763]: W0515 16:01:54.906507 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.907441 kubelet[2763]: E0515 16:01:54.906849 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.907775 kubelet[2763]: E0515 16:01:54.906933 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.907775 kubelet[2763]: W0515 16:01:54.907647 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.908008 kubelet[2763]: E0515 16:01:54.907978 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.908088 kubelet[2763]: W0515 16:01:54.908076 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.908301 kubelet[2763]: E0515 16:01:54.908290 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.908360 kubelet[2763]: W0515 16:01:54.908352 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.908413 kubelet[2763]: E0515 16:01:54.908401 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.908483 kubelet[2763]: E0515 16:01:54.908474 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.908790 kubelet[2763]: E0515 16:01:54.908776 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.908951 kubelet[2763]: W0515 16:01:54.908846 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.908951 kubelet[2763]: E0515 16:01:54.908861 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.908951 kubelet[2763]: E0515 16:01:54.908883 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.909155 kubelet[2763]: E0515 16:01:54.909145 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.909202 kubelet[2763]: W0515 16:01:54.909193 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.909397 kubelet[2763]: E0515 16:01:54.909249 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.909545 kubelet[2763]: E0515 16:01:54.909534 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.909602 kubelet[2763]: W0515 16:01:54.909593 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.909879 kubelet[2763]: E0515 16:01:54.909642 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.940729 kubelet[2763]: E0515 16:01:54.940305 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.940729 kubelet[2763]: W0515 16:01:54.940333 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.942565 kubelet[2763]: E0515 16:01:54.942522 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.949713 containerd[1533]: time="2025-05-15T16:01:54.949278547Z" level=info msg="connecting to shim ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97" address="unix:///run/containerd/s/c844e1eff2d8b2e7c606d5652dcb7d2a0f9638fb8ebabd20ff34ff6df5c78321" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:54.990418 systemd[1]: Started cri-containerd-ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97.scope - libcontainer container ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97. 
May 15 16:01:54.994533 kubelet[2763]: E0515 16:01:54.994497 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.994533 kubelet[2763]: W0515 16:01:54.994522 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.994711 kubelet[2763]: E0515 16:01:54.994546 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.995390 kubelet[2763]: E0515 16:01:54.995354 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.995937 kubelet[2763]: W0515 16:01:54.995909 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.996248 kubelet[2763]: E0515 16:01:54.995951 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.996298 kubelet[2763]: E0515 16:01:54.996279 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.996328 kubelet[2763]: W0515 16:01:54.996293 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.996362 kubelet[2763]: E0515 16:01:54.996329 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.996893 kubelet[2763]: E0515 16:01:54.996877 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.997069 kubelet[2763]: W0515 16:01:54.997030 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.997764 kubelet[2763]: E0515 16:01:54.997583 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.997894 kubelet[2763]: E0515 16:01:54.997881 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.997954 kubelet[2763]: W0515 16:01:54.997944 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.998058 kubelet[2763]: E0515 16:01:54.998035 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:54.998330 kubelet[2763]: E0515 16:01:54.998242 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.998330 kubelet[2763]: W0515 16:01:54.998255 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.998330 kubelet[2763]: E0515 16:01:54.998275 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:54.998859 kubelet[2763]: E0515 16:01:54.998840 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:54.999009 kubelet[2763]: W0515 16:01:54.998957 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:54.999088 kubelet[2763]: E0515 16:01:54.999049 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.000225 kubelet[2763]: E0515 16:01:55.000204 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:55.000359 kubelet[2763]: E0515 16:01:55.000340 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.000403 kubelet[2763]: W0515 16:01:55.000360 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.000999 kubelet[2763]: E0515 16:01:55.000932 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.000999 kubelet[2763]: W0515 16:01:55.000948 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.001211 containerd[1533]: time="2025-05-15T16:01:55.001173663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68559,Uid:e007eeab-9069-48bd-be2f-87c5ad02bcf8,Namespace:calico-system,Attempt:0,}" May 15 16:01:55.001577 kubelet[2763]: E0515 16:01:55.001464 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.001577 kubelet[2763]: W0515 16:01:55.001481 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.002167 kubelet[2763]: E0515 16:01:55.001695 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.002167 kubelet[2763]: E0515 16:01:55.001461 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.002167 kubelet[2763]: E0515 16:01:55.001780 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.002823 kubelet[2763]: E0515 16:01:55.002803 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.002823 kubelet[2763]: W0515 16:01:55.002818 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.003692 kubelet[2763]: E0515 16:01:55.003668 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.003946 kubelet[2763]: E0515 16:01:55.003930 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.003946 kubelet[2763]: W0515 16:01:55.003944 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.004157 kubelet[2763]: E0515 16:01:55.003979 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.004644 kubelet[2763]: E0515 16:01:55.004623 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.004700 kubelet[2763]: W0515 16:01:55.004643 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.005068 kubelet[2763]: E0515 16:01:55.005050 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.006146 kubelet[2763]: E0515 16:01:55.006127 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.006146 kubelet[2763]: W0515 16:01:55.006143 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.006348 kubelet[2763]: E0515 16:01:55.006334 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.006478 kubelet[2763]: E0515 16:01:55.006463 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.006478 kubelet[2763]: W0515 16:01:55.006476 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.007193 kubelet[2763]: E0515 16:01:55.007166 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.008914 kubelet[2763]: E0515 16:01:55.008890 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.008914 kubelet[2763]: W0515 16:01:55.008909 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.009094 kubelet[2763]: E0515 16:01:55.008949 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.009194 kubelet[2763]: E0515 16:01:55.009178 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.009194 kubelet[2763]: W0515 16:01:55.009192 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.009323 kubelet[2763]: E0515 16:01:55.009267 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.009515 kubelet[2763]: E0515 16:01:55.009466 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.009515 kubelet[2763]: W0515 16:01:55.009513 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.009734 kubelet[2763]: E0515 16:01:55.009715 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.010216 kubelet[2763]: E0515 16:01:55.010196 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.010216 kubelet[2763]: W0515 16:01:55.010210 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.010374 kubelet[2763]: E0515 16:01:55.010225 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.010499 kubelet[2763]: E0515 16:01:55.010482 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.010499 kubelet[2763]: W0515 16:01:55.010494 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.011522 kubelet[2763]: E0515 16:01:55.011462 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.011661 kubelet[2763]: E0515 16:01:55.011646 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.011701 kubelet[2763]: W0515 16:01:55.011664 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.011845 kubelet[2763]: E0515 16:01:55.011794 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.011898 kubelet[2763]: E0515 16:01:55.011886 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.011934 kubelet[2763]: W0515 16:01:55.011903 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.012040 kubelet[2763]: E0515 16:01:55.011978 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.012226 kubelet[2763]: E0515 16:01:55.012193 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.012226 kubelet[2763]: W0515 16:01:55.012218 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.012973 kubelet[2763]: E0515 16:01:55.012949 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.013296 kubelet[2763]: E0515 16:01:55.013282 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.013349 kubelet[2763]: W0515 16:01:55.013295 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.013349 kubelet[2763]: E0515 16:01:55.013329 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:55.014190 kubelet[2763]: E0515 16:01:55.014172 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.014228 kubelet[2763]: W0515 16:01:55.014191 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.014228 kubelet[2763]: E0515 16:01:55.014208 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.030424 kubelet[2763]: E0515 16:01:55.030333 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:55.030424 kubelet[2763]: W0515 16:01:55.030358 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:55.030424 kubelet[2763]: E0515 16:01:55.030378 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:55.041210 containerd[1533]: time="2025-05-15T16:01:55.041158081Z" level=info msg="connecting to shim e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924" address="unix:///run/containerd/s/2953eae4c023bfbc715a1f1695e2f7063e0783631c06e552e9ea0feb86c34482" namespace=k8s.io protocol=ttrpc version=3 May 15 16:01:55.091258 systemd[1]: Started cri-containerd-e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924.scope - libcontainer container e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924. 
May 15 16:01:55.109063 containerd[1533]: time="2025-05-15T16:01:55.107843623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8b9bd54c9-lhz4q,Uid:767d34ab-3299-46dd-add9-09d52538ad17,Namespace:calico-system,Attempt:0,} returns sandbox id \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\"" May 15 16:01:55.112130 kubelet[2763]: E0515 16:01:55.111869 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:55.115970 containerd[1533]: time="2025-05-15T16:01:55.115724358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 16:01:55.157591 containerd[1533]: time="2025-05-15T16:01:55.157546439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68559,Uid:e007eeab-9069-48bd-be2f-87c5ad02bcf8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\"" May 15 16:01:55.158731 kubelet[2763]: E0515 16:01:55.158708 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:56.858441 kubelet[2763]: E0515 16:01:56.857881 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e" May 15 16:01:57.919275 containerd[1533]: time="2025-05-15T16:01:57.919213567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:57.920595 containerd[1533]: time="2025-05-15T16:01:57.920353005Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 16:01:57.921286 containerd[1533]: time="2025-05-15T16:01:57.921186264Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:57.924671 containerd[1533]: time="2025-05-15T16:01:57.924619945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:01:57.926040 containerd[1533]: time="2025-05-15T16:01:57.925867656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.810097944s" May 15 16:01:57.926040 containerd[1533]: time="2025-05-15T16:01:57.925946498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 16:01:57.928393 containerd[1533]: time="2025-05-15T16:01:57.928326525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 16:01:57.947749 containerd[1533]: time="2025-05-15T16:01:57.947702425Z" level=info msg="CreateContainer within sandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 16:01:57.956016 containerd[1533]: time="2025-05-15T16:01:57.955295470Z" level=info msg="Container cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8: CDI devices from CRI Config.CDIDevices: []" May 15 16:01:57.963312 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount922815207.mount: Deactivated successfully. May 15 16:01:57.967422 containerd[1533]: time="2025-05-15T16:01:57.967343701Z" level=info msg="CreateContainer within sandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\"" May 15 16:01:57.969018 containerd[1533]: time="2025-05-15T16:01:57.968590371Z" level=info msg="StartContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\"" May 15 16:01:57.970836 containerd[1533]: time="2025-05-15T16:01:57.970765563Z" level=info msg="connecting to shim cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8" address="unix:///run/containerd/s/c844e1eff2d8b2e7c606d5652dcb7d2a0f9638fb8ebabd20ff34ff6df5c78321" protocol=ttrpc version=3 May 15 16:01:58.000351 systemd[1]: Started cri-containerd-cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8.scope - libcontainer container cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8. 
May 15 16:01:58.074720 containerd[1533]: time="2025-05-15T16:01:58.074656804Z" level=info msg="StartContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" returns successfully" May 15 16:01:58.858923 kubelet[2763]: E0515 16:01:58.857912 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e" May 15 16:01:58.969498 kubelet[2763]: E0515 16:01:58.969300 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:01:58.984918 kubelet[2763]: I0515 16:01:58.984752 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8b9bd54c9-lhz4q" podStartSLOduration=2.171706611 podStartE2EDuration="4.984732125s" podCreationTimestamp="2025-05-15 16:01:54 +0000 UTC" firstStartedPulling="2025-05-15 16:01:55.114351923 +0000 UTC m=+22.413267934" lastFinishedPulling="2025-05-15 16:01:57.927377441 +0000 UTC m=+25.226293448" observedRunningTime="2025-05-15 16:01:58.984505929 +0000 UTC m=+26.283421947" watchObservedRunningTime="2025-05-15 16:01:58.984732125 +0000 UTC m=+26.283648142" May 15 16:01:59.008470 kubelet[2763]: E0515 16:01:59.008308 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.008470 kubelet[2763]: W0515 16:01:59.008342 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.008470 kubelet[2763]: E0515 16:01:59.008367 2763 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.008765 kubelet[2763]: E0515 16:01:59.008752 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.008943 kubelet[2763]: W0515 16:01:59.008829 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.008943 kubelet[2763]: E0515 16:01:59.008845 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.009096 kubelet[2763]: E0515 16:01:59.009086 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.009157 kubelet[2763]: W0515 16:01:59.009144 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.009217 kubelet[2763]: E0515 16:01:59.009207 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.009440 kubelet[2763]: E0515 16:01:59.009428 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.009636 kubelet[2763]: W0515 16:01:59.009513 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.009636 kubelet[2763]: E0515 16:01:59.009532 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.009760 kubelet[2763]: E0515 16:01:59.009751 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.009815 kubelet[2763]: W0515 16:01:59.009803 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.009920 kubelet[2763]: E0515 16:01:59.009905 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.010173 kubelet[2763]: E0515 16:01:59.010162 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.010358 kubelet[2763]: W0515 16:01:59.010250 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.010358 kubelet[2763]: E0515 16:01:59.010266 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.010564 kubelet[2763]: E0515 16:01:59.010459 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.010564 kubelet[2763]: W0515 16:01:59.010467 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.010564 kubelet[2763]: E0515 16:01:59.010476 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.010722 kubelet[2763]: E0515 16:01:59.010709 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.010784 kubelet[2763]: W0515 16:01:59.010775 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.010829 kubelet[2763]: E0515 16:01:59.010821 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.011149 kubelet[2763]: E0515 16:01:59.011042 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.011149 kubelet[2763]: W0515 16:01:59.011051 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.011149 kubelet[2763]: E0515 16:01:59.011061 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.011314 kubelet[2763]: E0515 16:01:59.011298 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.011401 kubelet[2763]: W0515 16:01:59.011387 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.011460 kubelet[2763]: E0515 16:01:59.011450 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.011762 kubelet[2763]: E0515 16:01:59.011665 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.011762 kubelet[2763]: W0515 16:01:59.011675 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.011762 kubelet[2763]: E0515 16:01:59.011684 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.011925 kubelet[2763]: E0515 16:01:59.011916 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.012149 kubelet[2763]: W0515 16:01:59.012038 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.012149 kubelet[2763]: E0515 16:01:59.012055 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.012347 kubelet[2763]: E0515 16:01:59.012247 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.012347 kubelet[2763]: W0515 16:01:59.012254 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.012347 kubelet[2763]: E0515 16:01:59.012263 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.012514 kubelet[2763]: E0515 16:01:59.012504 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.012571 kubelet[2763]: W0515 16:01:59.012562 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.012704 kubelet[2763]: E0515 16:01:59.012613 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.012801 kubelet[2763]: E0515 16:01:59.012791 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.012848 kubelet[2763]: W0515 16:01:59.012840 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.012905 kubelet[2763]: E0515 16:01:59.012896 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.033376 kubelet[2763]: E0515 16:01:59.033337 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.033376 kubelet[2763]: W0515 16:01:59.033364 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.033376 kubelet[2763]: E0515 16:01:59.033387 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.033713 kubelet[2763]: E0515 16:01:59.033695 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.033713 kubelet[2763]: W0515 16:01:59.033709 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.033782 kubelet[2763]: E0515 16:01:59.033727 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.034026 kubelet[2763]: E0515 16:01:59.034011 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.034077 kubelet[2763]: W0515 16:01:59.034027 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.034077 kubelet[2763]: E0515 16:01:59.034044 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.034299 kubelet[2763]: E0515 16:01:59.034281 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.034299 kubelet[2763]: W0515 16:01:59.034296 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.034391 kubelet[2763]: E0515 16:01:59.034324 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.034543 kubelet[2763]: E0515 16:01:59.034532 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.034543 kubelet[2763]: W0515 16:01:59.034542 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.034619 kubelet[2763]: E0515 16:01:59.034562 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.034726 kubelet[2763]: E0515 16:01:59.034715 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.034726 kubelet[2763]: W0515 16:01:59.034724 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.034859 kubelet[2763]: E0515 16:01:59.034844 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.034923 kubelet[2763]: E0515 16:01:59.034913 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.034923 kubelet[2763]: W0515 16:01:59.034922 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.035099 kubelet[2763]: E0515 16:01:59.035033 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.035142 kubelet[2763]: E0515 16:01:59.035114 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.035142 kubelet[2763]: W0515 16:01:59.035122 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.035142 kubelet[2763]: E0515 16:01:59.035132 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.035269 kubelet[2763]: E0515 16:01:59.035259 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.035269 kubelet[2763]: W0515 16:01:59.035268 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.035331 kubelet[2763]: E0515 16:01:59.035275 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.035499 kubelet[2763]: E0515 16:01:59.035488 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.035534 kubelet[2763]: W0515 16:01:59.035499 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.035534 kubelet[2763]: E0515 16:01:59.035513 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.035738 kubelet[2763]: E0515 16:01:59.035727 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.035738 kubelet[2763]: W0515 16:01:59.035737 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.035799 kubelet[2763]: E0515 16:01:59.035753 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.035998 kubelet[2763]: E0515 16:01:59.035976 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.036041 kubelet[2763]: W0515 16:01:59.036014 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.036041 kubelet[2763]: E0515 16:01:59.036026 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.037085 kubelet[2763]: E0515 16:01:59.037058 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.037085 kubelet[2763]: W0515 16:01:59.037075 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.037085 kubelet[2763]: E0515 16:01:59.037090 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.037278 kubelet[2763]: E0515 16:01:59.037266 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.037278 kubelet[2763]: W0515 16:01:59.037273 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.037335 kubelet[2763]: E0515 16:01:59.037281 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.037446 kubelet[2763]: E0515 16:01:59.037434 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.037446 kubelet[2763]: W0515 16:01:59.037445 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.037542 kubelet[2763]: E0515 16:01:59.037454 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.037594 kubelet[2763]: E0515 16:01:59.037584 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.037594 kubelet[2763]: W0515 16:01:59.037593 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.037666 kubelet[2763]: E0515 16:01:59.037602 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.037757 kubelet[2763]: E0515 16:01:59.037747 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.037757 kubelet[2763]: W0515 16:01:59.037756 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.037815 kubelet[2763]: E0515 16:01:59.037766 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:01:59.038209 kubelet[2763]: E0515 16:01:59.038193 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:01:59.038209 kubelet[2763]: W0515 16:01:59.038205 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:01:59.038288 kubelet[2763]: E0515 16:01:59.038217 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:01:59.972812 kubelet[2763]: I0515 16:01:59.971595 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 16:01:59.972812 kubelet[2763]: E0515 16:01:59.972217 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:02:00.020425 kubelet[2763]: E0515 16:02:00.019747 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.020425 kubelet[2763]: W0515 16:02:00.020252 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.020425 kubelet[2763]: E0515 16:02:00.020303 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:02:00.021478 kubelet[2763]: E0515 16:02:00.021299 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.021478 kubelet[2763]: W0515 16:02:00.021330 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.021478 kubelet[2763]: E0515 16:02:00.021366 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:02:00.023088 kubelet[2763]: E0515 16:02:00.022951 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.023088 kubelet[2763]: W0515 16:02:00.022975 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.023088 kubelet[2763]: E0515 16:02:00.023056 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023323 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.025376 kubelet[2763]: W0515 16:02:00.023340 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023355 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023546 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.025376 kubelet[2763]: W0515 16:02:00.023555 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023566 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023699 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.025376 kubelet[2763]: W0515 16:02:00.023705 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023713 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:02:00.025376 kubelet[2763]: E0515 16:02:00.023894 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.027715 kubelet[2763]: W0515 16:02:00.023902 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.023915 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.024215 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:02:00.027715 kubelet[2763]: W0515 16:02:00.024233 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.024244 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.024402 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.027715 kubelet[2763]: W0515 16:02:00.024409 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.024416 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.027715 kubelet[2763]: E0515 16:02:00.024574 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.027715 kubelet[2763]: W0515 16:02:00.024581 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.024590 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025161 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.028308 kubelet[2763]: W0515 16:02:00.025171 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025185 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025439 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.028308 kubelet[2763]: W0515 16:02:00.025450 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025463 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025921 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.028308 kubelet[2763]: W0515 16:02:00.025937 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028308 kubelet[2763]: E0515 16:02:00.025951 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.028874 kubelet[2763]: E0515 16:02:00.026669 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.028874 kubelet[2763]: W0515 16:02:00.026680 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028874 kubelet[2763]: E0515 16:02:00.026695 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.028874 kubelet[2763]: E0515 16:02:00.027122 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.028874 kubelet[2763]: W0515 16:02:00.027138 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.028874 kubelet[2763]: E0515 16:02:00.027155 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.041202 kubelet[2763]: E0515 16:02:00.040917 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.041202 kubelet[2763]: W0515 16:02:00.040952 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.041202 kubelet[2763]: E0515 16:02:00.040982 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 15 16:02:00.041586 kubelet[2763]: E0515 16:02:00.041567 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.041654 kubelet[2763]: W0515 16:02:00.041643 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.041714 kubelet[2763]: E0515 16:02:00.041704 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.042094 kubelet[2763]: E0515 16:02:00.042060 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.042094 kubelet[2763]: W0515 16:02:00.042086 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.042211 kubelet[2763]: E0515 16:02:00.042112 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.042362 kubelet[2763]: E0515 16:02:00.042343 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.042362 kubelet[2763]: W0515 16:02:00.042361 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.042646 kubelet[2763]: E0515 16:02:00.042391 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.042713 kubelet[2763]: E0515 16:02:00.042700 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.042744 kubelet[2763]: W0515 16:02:00.042715 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.042744 kubelet[2763]: E0515 16:02:00.042736 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.043092 kubelet[2763]: E0515 16:02:00.043072 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.043092 kubelet[2763]: W0515 16:02:00.043088 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.043256 kubelet[2763]: E0515 16:02:00.043215 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.043323 kubelet[2763]: E0515 16:02:00.043291 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.043323 kubelet[2763]: W0515 16:02:00.043302 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.043408 kubelet[2763]: E0515 16:02:00.043386 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.043601 kubelet[2763]: E0515 16:02:00.043582 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.043601 kubelet[2763]: W0515 16:02:00.043599 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.043667 kubelet[2763]: E0515 16:02:00.043619 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.043915 kubelet[2763]: E0515 16:02:00.043896 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.043915 kubelet[2763]: W0515 16:02:00.043914 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.044043 kubelet[2763]: E0515 16:02:00.043943 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 15 16:02:00.044505 kubelet[2763]: E0515 16:02:00.044469 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.044505 kubelet[2763]: W0515 16:02:00.044490 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.044617 kubelet[2763]: E0515 16:02:00.044515 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.044798 kubelet[2763]: E0515 16:02:00.044776 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.044798 kubelet[2763]: W0515 16:02:00.044794 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.044944 kubelet[2763]: E0515 16:02:00.044825 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.045020 kubelet[2763]: E0515 16:02:00.045006 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.045049 kubelet[2763]: W0515 16:02:00.045020 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.045134 kubelet[2763]: E0515 16:02:00.045111 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.045303 kubelet[2763]: E0515 16:02:00.045284 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.045303 kubelet[2763]: W0515 16:02:00.045300 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.045368 kubelet[2763]: E0515 16:02:00.045319 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.045604 kubelet[2763]: E0515 16:02:00.045586 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.045664 kubelet[2763]: W0515 16:02:00.045648 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.045695 kubelet[2763]: E0515 16:02:00.045676 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.046174 kubelet[2763]: E0515 16:02:00.046070 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.046174 kubelet[2763]: W0515 16:02:00.046085 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.046174 kubelet[2763]: E0515 16:02:00.046102 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.046351 kubelet[2763]: E0515 16:02:00.046333 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.046482 kubelet[2763]: W0515 16:02:00.046352 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.046482 kubelet[2763]: E0515 16:02:00.046377 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.046633 kubelet[2763]: E0515 16:02:00.046619 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.046668 kubelet[2763]: W0515 16:02:00.046635 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.046668 kubelet[2763]: E0515 16:02:00.046648 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 15 16:02:00.047369 kubelet[2763]: E0515 16:02:00.047351 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 16:02:00.047369 kubelet[2763]: W0515 16:02:00.047367 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 16:02:00.047471 kubelet[2763]: E0515 16:02:00.047383 2763 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 16:02:00.860016 kubelet[2763]: E0515 16:02:00.859127 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:01.704012 containerd[1533]: time="2025-05-15T16:02:01.703004772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:01.704895 containerd[1533]: time="2025-05-15T16:02:01.704846429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 15 16:02:01.705886 containerd[1533]: time="2025-05-15T16:02:01.705837328Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:01.708470 containerd[1533]: time="2025-05-15T16:02:01.708417203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:01.709849 containerd[1533]: time="2025-05-15T16:02:01.709658047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 3.781285945s"
May 15 16:02:01.709849 containerd[1533]: time="2025-05-15T16:02:01.709716442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 15 16:02:01.714498 containerd[1533]: time="2025-05-15T16:02:01.713876561Z" level=info msg="CreateContainer within sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 15 16:02:01.727025 containerd[1533]: time="2025-05-15T16:02:01.722590347Z" level=info msg="Container 03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a: CDI devices from CRI Config.CDIDevices: []"
May 15 16:02:01.743822 containerd[1533]: time="2025-05-15T16:02:01.743685623Z" level=info msg="CreateContainer within sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\""
May 15 16:02:01.745932 containerd[1533]: time="2025-05-15T16:02:01.745879257Z" level=info msg="StartContainer for \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\""
May 15 16:02:01.749211 containerd[1533]: time="2025-05-15T16:02:01.749130286Z" level=info msg="connecting to shim 03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a" address="unix:///run/containerd/s/2953eae4c023bfbc715a1f1695e2f7063e0783631c06e552e9ea0feb86c34482" protocol=ttrpc version=3
May 15 16:02:01.780247 systemd[1]: Started cri-containerd-03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a.scope - libcontainer container 03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a.
May 15 16:02:01.838067 containerd[1533]: time="2025-05-15T16:02:01.838014162Z" level=info msg="StartContainer for \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" returns successfully"
May 15 16:02:01.954951 systemd[1]: cri-containerd-03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a.scope: Deactivated successfully.
May 15 16:02:01.956415 systemd[1]: cri-containerd-03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a.scope: Consumed 46ms CPU time, 7.9M memory peak, 5.1M written to disk.
May 15 16:02:01.959728 containerd[1533]: time="2025-05-15T16:02:01.959321201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" id:\"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" pid:3425 exited_at:{seconds:1747324921 nanos:958458564}"
May 15 16:02:01.959728 containerd[1533]: time="2025-05-15T16:02:01.959410366Z" level=info msg="received exit event container_id:\"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" id:\"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" pid:3425 exited_at:{seconds:1747324921 nanos:958458564}"
May 15 16:02:01.983227 kubelet[2763]: E0515 16:02:01.983075 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:02.008545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a-rootfs.mount: Deactivated successfully.
May 15 16:02:02.859498 kubelet[2763]: E0515 16:02:02.859042 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:02.987692 kubelet[2763]: E0515 16:02:02.987654 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:02.989866 containerd[1533]: time="2025-05-15T16:02:02.989817679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 15 16:02:04.859367 kubelet[2763]: E0515 16:02:04.858941 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:06.858699 kubelet[2763]: E0515 16:02:06.858565 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:08.070165 kubelet[2763]: I0515 16:02:08.070112 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 16:02:08.075022 kubelet[2763]: E0515 16:02:08.074394 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:08.665103 containerd[1533]: time="2025-05-15T16:02:08.665014509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:08.666763 containerd[1533]: time="2025-05-15T16:02:08.666531157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683"
May 15 16:02:08.667722 containerd[1533]: time="2025-05-15T16:02:08.667669121Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:08.670087 containerd[1533]: time="2025-05-15T16:02:08.670017420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 16:02:08.726612 containerd[1533]: time="2025-05-15T16:02:08.726553086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.736685272s"
May 15 16:02:08.726612 containerd[1533]: time="2025-05-15T16:02:08.726612386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\""
May 15 16:02:08.732776 containerd[1533]: time="2025-05-15T16:02:08.732720508Z" level=info msg="CreateContainer within sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 15 16:02:08.749243 containerd[1533]: time="2025-05-15T16:02:08.749180600Z" level=info msg="Container f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0: CDI devices from CRI Config.CDIDevices: []"
May 15 16:02:08.759728 containerd[1533]: time="2025-05-15T16:02:08.759591092Z" level=info msg="CreateContainer within sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\""
May 15 16:02:08.762052 containerd[1533]: time="2025-05-15T16:02:08.760900088Z" level=info msg="StartContainer for \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\""
May 15 16:02:08.762649 containerd[1533]: time="2025-05-15T16:02:08.762620242Z" level=info msg="connecting to shim f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0" address="unix:///run/containerd/s/2953eae4c023bfbc715a1f1695e2f7063e0783631c06e552e9ea0feb86c34482" protocol=ttrpc version=3
May 15 16:02:08.794281 systemd[1]: Started cri-containerd-f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0.scope - libcontainer container f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0.scope.
May 15 16:02:08.858233 kubelet[2763]: E0515 16:02:08.858184 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:08.866850 containerd[1533]: time="2025-05-15T16:02:08.866796227Z" level=info msg="StartContainer for \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" returns successfully"
May 15 16:02:09.019965 kubelet[2763]: E0515 16:02:09.019188 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:09.022097 kubelet[2763]: E0515 16:02:09.021612 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:09.436478 systemd[1]: cri-containerd-f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0.scope: Deactivated successfully.
May 15 16:02:09.438560 systemd[1]: cri-containerd-f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0.scope: Consumed 584ms CPU time, 143.8M memory peak, 1.5M read from disk, 154M written to disk.
May 15 16:02:09.443105 containerd[1533]: time="2025-05-15T16:02:09.443049805Z" level=info msg="received exit event container_id:\"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" id:\"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" pid:3486 exited_at:{seconds:1747324929 nanos:442675691}"
May 15 16:02:09.445847 containerd[1533]: time="2025-05-15T16:02:09.445787174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" id:\"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" pid:3486 exited_at:{seconds:1747324929 nanos:442675691}"
May 15 16:02:09.487496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0-rootfs.mount: Deactivated successfully.
May 15 16:02:09.510875 kubelet[2763]: I0515 16:02:09.510823 2763 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 15 16:02:09.548666 kubelet[2763]: I0515 16:02:09.548575 2763 topology_manager.go:215] "Topology Admit Handler" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42" podNamespace="calico-system" podName="calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:09.555629 kubelet[2763]: I0515 16:02:09.555477 2763 topology_manager.go:215] "Topology Admit Handler" podUID="17084be0-dcb1-4553-93ab-fa631e730966" podNamespace="kube-system" podName="coredns-7db6d8ff4d-82kdh"
May 15 16:02:09.564177 kubelet[2763]: I0515 16:02:09.563184 2763 topology_manager.go:215] "Topology Admit Handler" podUID="190f3495-fcbc-4417-9f1f-2a56ad306602" podNamespace="calico-apiserver" podName="calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:09.565771 systemd[1]: Created slice kubepods-besteffort-podc4c65cd6_c8cd_4005_9b33_295db8fc6f42.slice - libcontainer container kubepods-besteffort-podc4c65cd6_c8cd_4005_9b33_295db8fc6f42.slice.
May 15 16:02:09.576258 kubelet[2763]: I0515 16:02:09.576222 2763 topology_manager.go:215] "Topology Admit Handler" podUID="90a5043f-268b-4543-8ae6-e221eca49d05" podNamespace="calico-apiserver" podName="calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:09.576726 kubelet[2763]: I0515 16:02:09.576669 2763 topology_manager.go:215] "Topology Admit Handler" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h2t96"
May 15 16:02:09.577516 kubelet[2763]: I0515 16:02:09.577471 2763 topology_manager.go:215] "Topology Admit Handler" podUID="58660def-214b-4e3e-b09a-ee98aeab89f5" podNamespace="calico-apiserver" podName="calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:09.589191 systemd[1]: Created slice kubepods-burstable-pod17084be0_dcb1_4553_93ab_fa631e730966.slice - libcontainer container kubepods-burstable-pod17084be0_dcb1_4553_93ab_fa631e730966.slice.
May 15 16:02:09.604058 systemd[1]: Created slice kubepods-besteffort-pod190f3495_fcbc_4417_9f1f_2a56ad306602.slice - libcontainer container kubepods-besteffort-pod190f3495_fcbc_4417_9f1f_2a56ad306602.slice.
May 15 16:02:09.619550 systemd[1]: Created slice kubepods-besteffort-pod90a5043f_268b_4543_8ae6_e221eca49d05.slice - libcontainer container kubepods-besteffort-pod90a5043f_268b_4543_8ae6_e221eca49d05.slice.
May 15 16:02:09.621428 kubelet[2763]: I0515 16:02:09.620494 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d86f0cb4-0d25-49dd-9a44-3295d0b01a8e-config-volume\") pod \"coredns-7db6d8ff4d-h2t96\" (UID: \"d86f0cb4-0d25-49dd-9a44-3295d0b01a8e\") " pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:09.621428 kubelet[2763]: I0515 16:02:09.620549 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58660def-214b-4e3e-b09a-ee98aeab89f5-calico-apiserver-certs\") pod \"calico-apiserver-bfb8c6495-dps6g\" (UID: \"58660def-214b-4e3e-b09a-ee98aeab89f5\") " pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:09.621428 kubelet[2763]: I0515 16:02:09.620618 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90a5043f-268b-4543-8ae6-e221eca49d05-calico-apiserver-certs\") pod \"calico-apiserver-6f8fb6b64-svvjj\" (UID: \"90a5043f-268b-4543-8ae6-e221eca49d05\") " pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:09.621428 kubelet[2763]: I0515 16:02:09.620649 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwtw\" (UniqueName: \"kubernetes.io/projected/d86f0cb4-0d25-49dd-9a44-3295d0b01a8e-kube-api-access-5kwtw\") pod \"coredns-7db6d8ff4d-h2t96\" (UID: \"d86f0cb4-0d25-49dd-9a44-3295d0b01a8e\") " pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:09.621428 kubelet[2763]: I0515 16:02:09.620674 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/190f3495-fcbc-4417-9f1f-2a56ad306602-calico-apiserver-certs\") pod \"calico-apiserver-6f8fb6b64-24phr\" (UID: \"190f3495-fcbc-4417-9f1f-2a56ad306602\") " pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:09.621686 kubelet[2763]: I0515 16:02:09.620711 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-tigera-ca-bundle\") pod \"calico-kube-controllers-5858fd5ccf-lw59z\" (UID: \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\") " pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:09.621686 kubelet[2763]: I0515 16:02:09.620729 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnvf\" (UniqueName: \"kubernetes.io/projected/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-kube-api-access-cjnvf\") pod \"calico-kube-controllers-5858fd5ccf-lw59z\" (UID: \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\") " pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:09.621686 kubelet[2763]: I0515 16:02:09.620748 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54n8h\" (UniqueName: \"kubernetes.io/projected/190f3495-fcbc-4417-9f1f-2a56ad306602-kube-api-access-54n8h\") pod \"calico-apiserver-6f8fb6b64-24phr\" (UID: \"190f3495-fcbc-4417-9f1f-2a56ad306602\") " pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:09.622554 kubelet[2763]: I0515 16:02:09.622065 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8kf\" (UniqueName: \"kubernetes.io/projected/58660def-214b-4e3e-b09a-ee98aeab89f5-kube-api-access-6r8kf\") pod \"calico-apiserver-bfb8c6495-dps6g\" (UID: \"58660def-214b-4e3e-b09a-ee98aeab89f5\") " pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:09.622554 kubelet[2763]: I0515 16:02:09.622113 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17084be0-dcb1-4553-93ab-fa631e730966-config-volume\") pod \"coredns-7db6d8ff4d-82kdh\" (UID: \"17084be0-dcb1-4553-93ab-fa631e730966\") " pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:09.622554 kubelet[2763]: I0515 16:02:09.622136 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7jc\" (UniqueName: \"kubernetes.io/projected/17084be0-dcb1-4553-93ab-fa631e730966-kube-api-access-tz7jc\") pod \"coredns-7db6d8ff4d-82kdh\" (UID: \"17084be0-dcb1-4553-93ab-fa631e730966\") " pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:09.622554 kubelet[2763]: I0515 16:02:09.622329 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb7s\" (UniqueName: \"kubernetes.io/projected/90a5043f-268b-4543-8ae6-e221eca49d05-kube-api-access-7rb7s\") pod \"calico-apiserver-6f8fb6b64-svvjj\" (UID: \"90a5043f-268b-4543-8ae6-e221eca49d05\") " pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:09.632065 systemd[1]: Created slice kubepods-besteffort-pod58660def_214b_4e3e_b09a_ee98aeab89f5.slice - libcontainer container kubepods-besteffort-pod58660def_214b_4e3e_b09a_ee98aeab89f5.slice.
May 15 16:02:09.640729 systemd[1]: Created slice kubepods-burstable-podd86f0cb4_0d25_49dd_9a44_3295d0b01a8e.slice - libcontainer container kubepods-burstable-podd86f0cb4_0d25_49dd_9a44_3295d0b01a8e.slice.
May 15 16:02:09.876643 containerd[1533]: time="2025-05-15T16:02:09.876587917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,}"
May 15 16:02:09.898100 kubelet[2763]: E0515 16:02:09.898029 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:09.903063 containerd[1533]: time="2025-05-15T16:02:09.903022366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}"
May 15 16:02:09.913841 containerd[1533]: time="2025-05-15T16:02:09.913538632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8fb6b64-24phr,Uid:190f3495-fcbc-4417-9f1f-2a56ad306602,Namespace:calico-apiserver,Attempt:0,}"
May 15 16:02:09.949865 kubelet[2763]: E0515 16:02:09.948488 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:09.958733 containerd[1533]: time="2025-05-15T16:02:09.957625572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}"
May 15 16:02:09.969529 containerd[1533]: time="2025-05-15T16:02:09.969479439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8fb6b64-svvjj,Uid:90a5043f-268b-4543-8ae6-e221eca49d05,Namespace:calico-apiserver,Attempt:0,}"
May 15 16:02:09.970385 containerd[1533]: time="2025-05-15T16:02:09.970327423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb8c6495-dps6g,Uid:58660def-214b-4e3e-b09a-ee98aeab89f5,Namespace:calico-apiserver,Attempt:0,}"
May 15 16:02:10.088037 kubelet[2763]: E0515 16:02:10.087290 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:10.096365 containerd[1533]: time="2025-05-15T16:02:10.096241241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 15 16:02:10.303703 containerd[1533]: time="2025-05-15T16:02:10.303649443Z" level=error msg="Failed to destroy network for sandbox \"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.307306 containerd[1533]: time="2025-05-15T16:02:10.307236325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.309361 kubelet[2763]: E0515 16:02:10.309303 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.310747 kubelet[2763]: E0515 16:02:10.309606 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:10.310747 kubelet[2763]: E0515 16:02:10.309647 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:10.310747 kubelet[2763]: E0515 16:02:10.309723 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b482170ae5a71a6c1de59fd8f46d9d8bd1f1e027bd58dc2e7b805995cb34745f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e"
May 15 16:02:10.322078 containerd[1533]: time="2025-05-15T16:02:10.321955420Z" level=error msg="Failed to destroy network for sandbox \"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.324934 containerd[1533]: time="2025-05-15T16:02:10.323604463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8fb6b64-24phr,Uid:190f3495-fcbc-4417-9f1f-2a56ad306602,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.325769 kubelet[2763]: E0515 16:02:10.325353 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.325769 kubelet[2763]: E0515 16:02:10.325435 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:10.325769 kubelet[2763]: E0515 16:02:10.325458 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:10.327095 kubelet[2763]: E0515 16:02:10.325498 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8fb6b64-24phr_calico-apiserver(190f3495-fcbc-4417-9f1f-2a56ad306602)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8fb6b64-24phr_calico-apiserver(190f3495-fcbc-4417-9f1f-2a56ad306602)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75859a63ae25709f01646dd9da0dd3654ec1535a67ee787a84343ecb8ac234dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr" podUID="190f3495-fcbc-4417-9f1f-2a56ad306602"
May 15 16:02:10.329600 containerd[1533]: time="2025-05-15T16:02:10.329517693Z" level=error msg="Failed to destroy network for sandbox \"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.331699 containerd[1533]: time="2025-05-15T16:02:10.331639344Z" level=error msg="Failed to destroy network for sandbox \"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.332778 containerd[1533]: time="2025-05-15T16:02:10.332472656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8fb6b64-svvjj,Uid:90a5043f-268b-4543-8ae6-e221eca49d05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.333737 containerd[1533]: time="2025-05-15T16:02:10.333687194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.335472 kubelet[2763]: E0515 16:02:10.333970 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.335472 kubelet[2763]: E0515 16:02:10.334086 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:10.335472 kubelet[2763]: E0515 16:02:10.334113 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:10.335655 kubelet[2763]: E0515 16:02:10.334176 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef3d00dbe9c8b17241096699e8b05fa5b356e96c693805f75a74f3ed7bc49857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966"
May 15 16:02:10.335655 kubelet[2763]: E0515 16:02:10.334786 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.335655 kubelet[2763]: E0515 16:02:10.334831 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:10.335834 kubelet[2763]: E0515 16:02:10.334850 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:10.335834 kubelet[2763]: E0515 16:02:10.334890 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8fb6b64-svvjj_calico-apiserver(90a5043f-268b-4543-8ae6-e221eca49d05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8fb6b64-svvjj_calico-apiserver(90a5043f-268b-4543-8ae6-e221eca49d05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff92a1c0fb70b219a305b4134f7114894dc06e1661b986898a3442a07d026b2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj" podUID="90a5043f-268b-4543-8ae6-e221eca49d05"
May 15 16:02:10.337944 containerd[1533]: time="2025-05-15T16:02:10.337580144Z" level=error msg="Failed to destroy network for sandbox \"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.339569 containerd[1533]: time="2025-05-15T16:02:10.339506631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.341443 kubelet[2763]: E0515 16:02:10.340812 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.341443 kubelet[2763]: E0515 16:02:10.340892 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:10.341443 kubelet[2763]: E0515 16:02:10.340920 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:10.341872 kubelet[2763]: E0515 16:02:10.341111 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ae7857eb5f3015617d785e1eb64bc284842b595722eceb671fce50684ad5a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42"
May 15 16:02:10.350862 containerd[1533]: time="2025-05-15T16:02:10.350813307Z" level=error msg="Failed to destroy network for sandbox \"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.353782 containerd[1533]: time="2025-05-15T16:02:10.353645687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb8c6495-dps6g,Uid:58660def-214b-4e3e-b09a-ee98aeab89f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.354094 kubelet[2763]: E0515 16:02:10.353920 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.354094 kubelet[2763]: E0515 16:02:10.354031 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:10.354094 kubelet[2763]: E0515 16:02:10.354060 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:10.354895 kubelet[2763]: E0515 16:02:10.354118 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bfb8c6495-dps6g_calico-apiserver(58660def-214b-4e3e-b09a-ee98aeab89f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bfb8c6495-dps6g_calico-apiserver(58660def-214b-4e3e-b09a-ee98aeab89f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"489dd735b7e5919fa9c42b62c2b79c9da0606364e4c45379244ba43f20d799c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g" podUID="58660def-214b-4e3e-b09a-ee98aeab89f5"
May 15 16:02:10.866936 systemd[1]: Created slice kubepods-besteffort-pod15ff8378_e357_4a15_80de_bc12411a603e.slice - libcontainer container kubepods-besteffort-pod15ff8378_e357_4a15_80de_bc12411a603e.slice.
May 15 16:02:10.874587 containerd[1533]: time="2025-05-15T16:02:10.874245514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:02:10.956075 containerd[1533]: time="2025-05-15T16:02:10.956023133Z" level=error msg="Failed to destroy network for sandbox \"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.959164 containerd[1533]: time="2025-05-15T16:02:10.959056122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.959375 systemd[1]: run-netns-cni\x2d04dfd978\x2d61fc\x2d6a0c\x2d19de\x2df9ef4572bab0.mount: Deactivated successfully.
May 15 16:02:10.961020 kubelet[2763]: E0515 16:02:10.959689 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:10.961020 kubelet[2763]: E0515 16:02:10.959771 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:10.961020 kubelet[2763]: E0515 16:02:10.959796 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:10.961451 kubelet[2763]: E0515 16:02:10.959851 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"161ecdf12f591bf1707a773e426c77e0d3b58f2128093f8653bbaf64c4d67997\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:13.069343 kubelet[2763]: I0515 16:02:13.061501 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:13.070893 kubelet[2763]: I0515 16:02:13.069370 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:13.080033 kubelet[2763]: I0515 16:02:13.079583 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:13.102859 kubelet[2763]: I0515 16:02:13.102825 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:13.103213 kubelet[2763]: I0515 16:02:13.103187 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-svvjj","calico-apiserver/calico-apiserver-6f8fb6b64-24phr","calico-apiserver/calico-apiserver-bfb8c6495-dps6g","calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-68559","calico-system/csi-node-driver-w2wp6","tigera-operator/tigera-operator-797db67f8-kmq9t","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:13.118618 kubelet[2763]: I0515 16:02:13.118576 2763 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"
May 15 16:02:13.118618 kubelet[2763]: I0515 16:02:13.118620 2763 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"]
May 15 16:02:13.159842 kubelet[2763]: I0515 16:02:13.158388 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-q8smq" nodeCondition=["DiskPressure"]
May 15 16:02:13.165140 kubelet[2763]: I0515 16:02:13.165102 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rb7s\" (UniqueName: \"kubernetes.io/projected/90a5043f-268b-4543-8ae6-e221eca49d05-kube-api-access-7rb7s\") pod \"90a5043f-268b-4543-8ae6-e221eca49d05\" (UID: \"90a5043f-268b-4543-8ae6-e221eca49d05\") "
May 15 16:02:13.166139 kubelet[2763]: I0515 16:02:13.166103 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90a5043f-268b-4543-8ae6-e221eca49d05-calico-apiserver-certs\") pod \"90a5043f-268b-4543-8ae6-e221eca49d05\" (UID: \"90a5043f-268b-4543-8ae6-e221eca49d05\") "
May 15 16:02:13.180012 kubelet[2763]: I0515 16:02:13.179922 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a5043f-268b-4543-8ae6-e221eca49d05-kube-api-access-7rb7s" (OuterVolumeSpecName: "kube-api-access-7rb7s") pod "90a5043f-268b-4543-8ae6-e221eca49d05" (UID: "90a5043f-268b-4543-8ae6-e221eca49d05"). InnerVolumeSpecName "kube-api-access-7rb7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:13.181813 systemd[1]: var-lib-kubelet-pods-90a5043f\x2d268b\x2d4543\x2d8ae6\x2de221eca49d05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rb7s.mount: Deactivated successfully.
May 15 16:02:13.197646 systemd[1]: var-lib-kubelet-pods-90a5043f\x2d268b\x2d4543\x2d8ae6\x2de221eca49d05-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 16:02:13.201523 kubelet[2763]: I0515 16:02:13.201438 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a5043f-268b-4543-8ae6-e221eca49d05-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "90a5043f-268b-4543-8ae6-e221eca49d05" (UID: "90a5043f-268b-4543-8ae6-e221eca49d05"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 16:02:13.227627 kubelet[2763]: I0515 16:02:13.227569 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-zpt6n" nodeCondition=["DiskPressure"]
May 15 16:02:13.266906 kubelet[2763]: I0515 16:02:13.266782 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7rb7s\" (UniqueName: \"kubernetes.io/projected/90a5043f-268b-4543-8ae6-e221eca49d05-kube-api-access-7rb7s\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:13.267533 kubelet[2763]: I0515 16:02:13.267329 2763 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90a5043f-268b-4543-8ae6-e221eca49d05-calico-apiserver-certs\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:13.289049 kubelet[2763]: I0515 16:02:13.288377 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-vmbk2" nodeCondition=["DiskPressure"]
May 15 16:02:13.343394 kubelet[2763]: I0515 16:02:13.343239 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-dklg6" nodeCondition=["DiskPressure"]
May 15 16:02:13.392515 kubelet[2763]: I0515 16:02:13.392303 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-wqbkr" nodeCondition=["DiskPressure"]
May 15 16:02:13.452580 kubelet[2763]: I0515 16:02:13.452500 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-rjs2h" nodeCondition=["DiskPressure"]
May 15 16:02:13.502057 kubelet[2763]: I0515 16:02:13.501891 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-hhq9x" nodeCondition=["DiskPressure"]
May 15 16:02:13.579229 kubelet[2763]: I0515 16:02:13.579174 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-5jhrz" nodeCondition=["DiskPressure"]
May 15 16:02:13.661454 kubelet[2763]: I0515 16:02:13.659975 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-hfbmt" nodeCondition=["DiskPressure"]
May 15 16:02:13.733554 kubelet[2763]: I0515 16:02:13.733479 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-zqnvq" nodeCondition=["DiskPressure"]
May 15 16:02:13.827166 kubelet[2763]: I0515 16:02:13.827105 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-kflb9" nodeCondition=["DiskPressure"]
May 15 16:02:13.943768 kubelet[2763]: I0515 16:02:13.943636 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-6f8fb6b64-db4p5" nodeCondition=["DiskPressure"]
May 15 16:02:14.162215 systemd[1]: Removed slice kubepods-besteffort-pod90a5043f_268b_4543_8ae6_e221eca49d05.slice - libcontainer container kubepods-besteffort-pod90a5043f_268b_4543_8ae6_e221eca49d05.slice.
May 15 16:02:15.119100 kubelet[2763]: I0515 16:02:15.119049 2763 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-svvjj"]
May 15 16:02:15.145589 kubelet[2763]: I0515 16:02:15.144968 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:15.146732 kubelet[2763]: I0515 16:02:15.146563 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:15.151937 kubelet[2763]: I0515 16:02:15.151908 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:15.189511 kubelet[2763]: I0515 16:02:15.189428 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:15.189968 kubelet[2763]: I0515 16:02:15.189777 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-bfb8c6495-dps6g","calico-apiserver/calico-apiserver-6f8fb6b64-24phr","kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-kube-controllers-5858fd5ccf-lw59z","calico-system/calico-node-68559","calico-system/csi-node-driver-w2wp6","tigera-operator/tigera-operator-797db67f8-kmq9t","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:15.203935 kubelet[2763]: I0515 16:02:15.203583 2763 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-bfb8c6495-dps6g"
May 15 16:02:15.203935 kubelet[2763]: I0515 16:02:15.203614 2763 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-bfb8c6495-dps6g"]
May 15 16:02:15.285429 kubelet[2763]: I0515 16:02:15.285348 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r8kf\" (UniqueName: \"kubernetes.io/projected/58660def-214b-4e3e-b09a-ee98aeab89f5-kube-api-access-6r8kf\") pod \"58660def-214b-4e3e-b09a-ee98aeab89f5\" (UID: \"58660def-214b-4e3e-b09a-ee98aeab89f5\") "
May 15 16:02:15.285429 kubelet[2763]: I0515 16:02:15.285399 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58660def-214b-4e3e-b09a-ee98aeab89f5-calico-apiserver-certs\") pod \"58660def-214b-4e3e-b09a-ee98aeab89f5\" (UID: \"58660def-214b-4e3e-b09a-ee98aeab89f5\") "
May 15 16:02:15.301604 systemd[1]: var-lib-kubelet-pods-58660def\x2d214b\x2d4e3e\x2db09a\x2dee98aeab89f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6r8kf.mount: Deactivated successfully.
May 15 16:02:15.305315 kubelet[2763]: I0515 16:02:15.305072 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58660def-214b-4e3e-b09a-ee98aeab89f5-kube-api-access-6r8kf" (OuterVolumeSpecName: "kube-api-access-6r8kf") pod "58660def-214b-4e3e-b09a-ee98aeab89f5" (UID: "58660def-214b-4e3e-b09a-ee98aeab89f5"). InnerVolumeSpecName "kube-api-access-6r8kf". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:15.308674 systemd[1]: var-lib-kubelet-pods-58660def\x2d214b\x2d4e3e\x2db09a\x2dee98aeab89f5-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 16:02:15.310201 kubelet[2763]: I0515 16:02:15.308280 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58660def-214b-4e3e-b09a-ee98aeab89f5-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "58660def-214b-4e3e-b09a-ee98aeab89f5" (UID: "58660def-214b-4e3e-b09a-ee98aeab89f5"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 16:02:15.386828 kubelet[2763]: I0515 16:02:15.386695 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6r8kf\" (UniqueName: \"kubernetes.io/projected/58660def-214b-4e3e-b09a-ee98aeab89f5-kube-api-access-6r8kf\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:15.387425 kubelet[2763]: I0515 16:02:15.387305 2763 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58660def-214b-4e3e-b09a-ee98aeab89f5-calico-apiserver-certs\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:15.978932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086442740.mount: Deactivated successfully.
May 15 16:02:15.981761 containerd[1533]: time="2025-05-15T16:02:15.981525120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4086442740: mkdir /var/lib/containerd/tmpmounts/containerd-mount4086442740/usr/lib/.build-id/7b: no space left on device"
May 15 16:02:15.981761 containerd[1533]: time="2025-05-15T16:02:15.981580228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 15 16:02:15.982347 kubelet[2763]: E0515 16:02:15.981806 2763 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4086442740: mkdir /var/lib/containerd/tmpmounts/containerd-mount4086442740/usr/lib/.build-id/7b: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:02:15.982347 kubelet[2763]: E0515 16:02:15.981875 2763 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4086442740: mkdir /var/lib/containerd/tmpmounts/containerd-mount4086442740/usr/lib/.build-id/7b: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:02:15.986687 kubelet[2763]: E0515 16:02:15.986535 2763 kuberuntime_manager.go:1256] container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbjkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-68559_calico-system(e007eeab-9069-48bd-be2f-87c5ad02bcf8): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4086442740: mkdir /var/lib/containerd/tmpmounts/containerd-mount4086442740/usr/lib/.build-id/7b: no space left on device
May 15 16:02:15.987049 kubelet[2763]: E0515 16:02:15.986604 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4086442740: mkdir /var/lib/containerd/tmpmounts/containerd-mount4086442740/usr/lib/.build-id/7b: no space left on device\"" pod="calico-system/calico-node-68559" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8"
May 15 16:02:16.107498 kubelet[2763]: E0515 16:02:16.106177 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:16.108257 kubelet[2763]: E0515 16:02:16.108218 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-68559" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8"
May 15 16:02:16.111035 systemd[1]: Removed slice kubepods-besteffort-pod58660def_214b_4e3e_b09a_ee98aeab89f5.slice - libcontainer container kubepods-besteffort-pod58660def_214b_4e3e_b09a_ee98aeab89f5.slice.
May 15 16:02:16.203935 kubelet[2763]: I0515 16:02:16.203868 2763 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-bfb8c6495-dps6g"]
May 15 16:02:16.217132 kubelet[2763]: I0515 16:02:16.217031 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:16.217132 kubelet[2763]: I0515 16:02:16.217082 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:16.221065 kubelet[2763]: I0515 16:02:16.220516 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:16.237960 kubelet[2763]: I0515 16:02:16.237844 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:16.238147 kubelet[2763]: I0515 16:02:16.238118 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-24phr","calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-82kdh","kube-system/coredns-7db6d8ff4d-h2t96","calico-system/csi-node-driver-w2wp6","calico-system/calico-node-68559","tigera-operator/tigera-operator-797db67f8-kmq9t","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:16.248260 kubelet[2763]: I0515 16:02:16.248216 2763 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6f8fb6b64-24phr"
May 15 16:02:16.248260 kubelet[2763]: I0515 16:02:16.248248 2763 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-24phr"]
May 15 16:02:16.294623 kubelet[2763]: I0515 16:02:16.294559 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/190f3495-fcbc-4417-9f1f-2a56ad306602-calico-apiserver-certs\") pod \"190f3495-fcbc-4417-9f1f-2a56ad306602\" (UID: \"190f3495-fcbc-4417-9f1f-2a56ad306602\") "
May 15 16:02:16.294623 kubelet[2763]: I0515 16:02:16.294607 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54n8h\" (UniqueName: \"kubernetes.io/projected/190f3495-fcbc-4417-9f1f-2a56ad306602-kube-api-access-54n8h\") pod \"190f3495-fcbc-4417-9f1f-2a56ad306602\" (UID: \"190f3495-fcbc-4417-9f1f-2a56ad306602\") "
May 15 16:02:16.302726 kubelet[2763]: I0515 16:02:16.302543 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190f3495-fcbc-4417-9f1f-2a56ad306602-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "190f3495-fcbc-4417-9f1f-2a56ad306602" (UID: "190f3495-fcbc-4417-9f1f-2a56ad306602"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 16:02:16.305223 kubelet[2763]: I0515 16:02:16.304885 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190f3495-fcbc-4417-9f1f-2a56ad306602-kube-api-access-54n8h" (OuterVolumeSpecName: "kube-api-access-54n8h") pod "190f3495-fcbc-4417-9f1f-2a56ad306602" (UID: "190f3495-fcbc-4417-9f1f-2a56ad306602"). InnerVolumeSpecName "kube-api-access-54n8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:16.306312 systemd[1]: var-lib-kubelet-pods-190f3495\x2dfcbc\x2d4417\x2d9f1f\x2d2a56ad306602-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54n8h.mount: Deactivated successfully.
May 15 16:02:16.310678 systemd[1]: var-lib-kubelet-pods-190f3495\x2dfcbc\x2d4417\x2d9f1f\x2d2a56ad306602-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 16:02:16.395284 kubelet[2763]: I0515 16:02:16.395236 2763 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/190f3495-fcbc-4417-9f1f-2a56ad306602-calico-apiserver-certs\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:16.395284 kubelet[2763]: I0515 16:02:16.395278 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-54n8h\" (UniqueName: \"kubernetes.io/projected/190f3495-fcbc-4417-9f1f-2a56ad306602-kube-api-access-54n8h\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:16.868059 systemd[1]: Removed slice kubepods-besteffort-pod190f3495_fcbc_4417_9f1f_2a56ad306602.slice - libcontainer container kubepods-besteffort-pod190f3495_fcbc_4417_9f1f_2a56ad306602.slice.
May 15 16:02:17.248859 kubelet[2763]: I0515 16:02:17.248773 2763 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6f8fb6b64-24phr"]
May 15 16:02:17.260898 kubelet[2763]: I0515 16:02:17.260805 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:17.260898 kubelet[2763]: I0515 16:02:17.260855 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:17.264074 kubelet[2763]: I0515 16:02:17.264042 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:17.282012 kubelet[2763]: I0515 16:02:17.281899 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:17.282354 kubelet[2763]: I0515 16:02:17.282243 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-82kdh","kube-system/coredns-7db6d8ff4d-h2t96","calico-system/csi-node-driver-w2wp6","calico-system/calico-node-68559","tigera-operator/tigera-operator-797db67f8-kmq9t","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:17.282354 kubelet[2763]: E0515 16:02:17.282314 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:17.282702 kubelet[2763]: E0515 16:02:17.282572 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:17.282702 kubelet[2763]: E0515 16:02:17.282593 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:17.282702 kubelet[2763]: E0515 16:02:17.282602 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:17.282702 kubelet[2763]: E0515 16:02:17.282612 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68559"
May 15 16:02:17.285631 containerd[1533]: time="2025-05-15T16:02:17.285573565Z" level=info msg="StopContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" with timeout 60 (s)"
May 15 16:02:17.293042 containerd[1533]: time="2025-05-15T16:02:17.292979375Z" level=info msg="Stop container \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" with signal terminated"
May 15 16:02:17.331391 systemd[1]: cri-containerd-38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53.scope: Deactivated successfully.
May 15 16:02:17.332100 systemd[1]: cri-containerd-38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53.scope: Consumed 1.545s CPU time, 32.9M memory peak, 5.8M read from disk.
May 15 16:02:17.334901 containerd[1533]: time="2025-05-15T16:02:17.334854145Z" level=info msg="received exit event container_id:\"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" id:\"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" pid:3127 exited_at:{seconds:1747324937 nanos:333926521}"
May 15 16:02:17.335775 containerd[1533]: time="2025-05-15T16:02:17.335720648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" id:\"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" pid:3127 exited_at:{seconds:1747324937 nanos:333926521}"
May 15 16:02:17.365598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53-rootfs.mount: Deactivated successfully.
May 15 16:02:17.371932 containerd[1533]: time="2025-05-15T16:02:17.371881218Z" level=info msg="StopContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" returns successfully"
May 15 16:02:17.373084 containerd[1533]: time="2025-05-15T16:02:17.373034749Z" level=info msg="StopPodSandbox for \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\""
May 15 16:02:17.378015 containerd[1533]: time="2025-05-15T16:02:17.377712977Z" level=info msg="Container to stop \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 16:02:17.387230 systemd[1]: cri-containerd-7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2.scope: Deactivated successfully.
May 15 16:02:17.389326 containerd[1533]: time="2025-05-15T16:02:17.389271773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" id:\"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" pid:2970 exit_status:137 exited_at:{seconds:1747324937 nanos:388223690}"
May 15 16:02:17.417312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2-rootfs.mount: Deactivated successfully.
May 15 16:02:17.420459 containerd[1533]: time="2025-05-15T16:02:17.420318798Z" level=info msg="shim disconnected" id=7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2 namespace=k8s.io
May 15 16:02:17.420459 containerd[1533]: time="2025-05-15T16:02:17.420354802Z" level=warning msg="cleaning up after shim disconnected" id=7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2 namespace=k8s.io
May 15 16:02:17.433538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2-shm.mount: Deactivated successfully.
May 15 16:02:17.455007 containerd[1533]: time="2025-05-15T16:02:17.420362557Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 16:02:17.455907 containerd[1533]: time="2025-05-15T16:02:17.430741971Z" level=info msg="received exit event sandbox_id:\"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" exit_status:137 exited_at:{seconds:1747324937 nanos:388223690}"
May 15 16:02:17.455907 containerd[1533]: time="2025-05-15T16:02:17.454808887Z" level=info msg="TearDown network for sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" successfully"
May 15 16:02:17.455907 containerd[1533]: time="2025-05-15T16:02:17.455558673Z" level=info msg="StopPodSandbox for \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" returns successfully"
May 15 16:02:17.466310 kubelet[2763]: I0515 16:02:17.465764 2763 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-797db67f8-kmq9t"
May 15 16:02:17.466310 kubelet[2763]: I0515 16:02:17.465792 2763 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-kmq9t"]
May 15 16:02:17.559799 kubelet[2763]: I0515 16:02:17.558605 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xdh4v" nodeCondition=["DiskPressure"]
May 15 16:02:17.594091 kubelet[2763]: I0515 16:02:17.594029 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hvldt" nodeCondition=["DiskPressure"]
May 15 16:02:17.601856 kubelet[2763]: I0515 16:02:17.601809 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10c5d861-69f5-41ae-bab2-9fe813c77a00-var-lib-calico\") pod \"10c5d861-69f5-41ae-bab2-9fe813c77a00\" (UID: \"10c5d861-69f5-41ae-bab2-9fe813c77a00\") "
May 15 16:02:17.601856 kubelet[2763]: I0515 16:02:17.601867 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrz6l\" (UniqueName: \"kubernetes.io/projected/10c5d861-69f5-41ae-bab2-9fe813c77a00-kube-api-access-mrz6l\") pod \"10c5d861-69f5-41ae-bab2-9fe813c77a00\" (UID: \"10c5d861-69f5-41ae-bab2-9fe813c77a00\") "
May 15 16:02:17.602488 kubelet[2763]: I0515 16:02:17.602448 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c5d861-69f5-41ae-bab2-9fe813c77a00-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "10c5d861-69f5-41ae-bab2-9fe813c77a00" (UID: "10c5d861-69f5-41ae-bab2-9fe813c77a00"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:17.615422 kubelet[2763]: I0515 16:02:17.615336 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c5d861-69f5-41ae-bab2-9fe813c77a00-kube-api-access-mrz6l" (OuterVolumeSpecName: "kube-api-access-mrz6l") pod "10c5d861-69f5-41ae-bab2-9fe813c77a00" (UID: "10c5d861-69f5-41ae-bab2-9fe813c77a00"). InnerVolumeSpecName "kube-api-access-mrz6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:17.615976 systemd[1]: var-lib-kubelet-pods-10c5d861\x2d69f5\x2d41ae\x2dbab2\x2d9fe813c77a00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmrz6l.mount: Deactivated successfully.
May 15 16:02:17.636851 kubelet[2763]: I0515 16:02:17.636627 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nrkd8" nodeCondition=["DiskPressure"]
May 15 16:02:17.672744 kubelet[2763]: I0515 16:02:17.672676 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-595hx" nodeCondition=["DiskPressure"]
May 15 16:02:17.702758 kubelet[2763]: I0515 16:02:17.702708 2763 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10c5d861-69f5-41ae-bab2-9fe813c77a00-var-lib-calico\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:17.702758 kubelet[2763]: I0515 16:02:17.702742 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mrz6l\" (UniqueName: \"kubernetes.io/projected/10c5d861-69f5-41ae-bab2-9fe813c77a00-kube-api-access-mrz6l\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:17.715628 kubelet[2763]: I0515 16:02:17.715576 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-skfhf" nodeCondition=["DiskPressure"]
May 15 16:02:17.751455 kubelet[2763]: I0515 16:02:17.751396 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zm5b7" nodeCondition=["DiskPressure"]
May 15 16:02:17.793197 kubelet[2763]: I0515 16:02:17.793139 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4gxmf" nodeCondition=["DiskPressure"]
May 15 16:02:17.838747 kubelet[2763]: I0515 16:02:17.838464 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-cb2rs" nodeCondition=["DiskPressure"]
May 15 16:02:17.873707 kubelet[2763]: I0515 16:02:17.873336 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8sncl" nodeCondition=["DiskPressure"]
May 15 16:02:17.910526 kubelet[2763]: I0515 16:02:17.910479 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zlc56" nodeCondition=["DiskPressure"]
May 15 16:02:18.071829 kubelet[2763]: I0515 16:02:18.071741 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lwvhb" nodeCondition=["DiskPressure"]
May 15 16:02:18.113101 kubelet[2763]: I0515 16:02:18.112158 2763 scope.go:117] "RemoveContainer" containerID="38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53"
May 15 16:02:18.117387 containerd[1533]: time="2025-05-15T16:02:18.117073915Z" level=info msg="RemoveContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\""
May 15 16:02:18.125089 systemd[1]: Removed slice kubepods-besteffort-pod10c5d861_69f5_41ae_bab2_9fe813c77a00.slice - libcontainer container kubepods-besteffort-pod10c5d861_69f5_41ae_bab2_9fe813c77a00.slice.
May 15 16:02:18.125306 systemd[1]: kubepods-besteffort-pod10c5d861_69f5_41ae_bab2_9fe813c77a00.slice: Consumed 1.578s CPU time, 33.1M memory peak, 5.8M read from disk.
May 15 16:02:18.128439 containerd[1533]: time="2025-05-15T16:02:18.128387723Z" level=info msg="RemoveContainer for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" returns successfully"
May 15 16:02:18.133716 kubelet[2763]: I0515 16:02:18.133576 2763 scope.go:117] "RemoveContainer" containerID="38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53"
May 15 16:02:18.134155 containerd[1533]: time="2025-05-15T16:02:18.134071440Z" level=error msg="ContainerStatus for \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\": not found"
May 15 16:02:18.134368 kubelet[2763]: E0515 16:02:18.134336 2763 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\": not found" containerID="38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53"
May 15 16:02:18.134432 kubelet[2763]: I0515 16:02:18.134384 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53"} err="failed to get container status \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\": rpc error: code = NotFound desc = an error occurred when try to find container \"38386cdb3a9fdf803f445fe321ef368097c50aa6eb4c23406d63d25efbe39e53\": not found"
May 15 16:02:18.167387 kubelet[2763]: I0515 16:02:18.167281 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8gstw" nodeCondition=["DiskPressure"]
May 15 16:02:18.323288 kubelet[2763]: I0515 16:02:18.323238 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-w6fzv" nodeCondition=["DiskPressure"]
May 15 16:02:18.467026 kubelet[2763]: I0515 16:02:18.466843 2763 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-kmq9t"]
May 15 16:02:18.472001 kubelet[2763]: I0515 16:02:18.471929 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zhf8s" nodeCondition=["DiskPressure"]
May 15 16:02:18.493646 kubelet[2763]: I0515 16:02:18.493592 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:18.494041 kubelet[2763]: I0515 16:02:18.493852 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:18.496558 containerd[1533]: time="2025-05-15T16:02:18.496493932Z" level=info msg="StopPodSandbox for \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\""
May 15 16:02:18.497735 containerd[1533]: time="2025-05-15T16:02:18.497534361Z" level=info msg="TearDown network for sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" successfully"
May 15 16:02:18.497735 containerd[1533]: time="2025-05-15T16:02:18.497579881Z" level=info msg="StopPodSandbox for \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" returns successfully"
May 15 16:02:18.499107 containerd[1533]: time="2025-05-15T16:02:18.498293591Z" level=info msg="RemovePodSandbox for \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\""
May 15 16:02:18.499107 containerd[1533]: time="2025-05-15T16:02:18.498333904Z" level=info msg="Forcibly stopping sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\""
May 15 16:02:18.499107 containerd[1533]: time="2025-05-15T16:02:18.498453832Z" level=info msg="TearDown network for sandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" successfully"
May 15 16:02:18.500256 containerd[1533]: time="2025-05-15T16:02:18.500221242Z" level=info msg="Ensure that sandbox 7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2 in task-service has been cleanup successfully"
May 15 16:02:18.503683 containerd[1533]: time="2025-05-15T16:02:18.503555891Z" level=info msg="RemovePodSandbox \"7b885a725488b71097efeec3a627377edfa0da3d30c30196d39f6b7f9e05f5f2\" returns successfully"
May 15 16:02:18.504616 kubelet[2763]: I0515 16:02:18.504581 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:18.521629 kubelet[2763]: I0515 16:02:18.521405 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:18.521629 kubelet[2763]: I0515 16:02:18.521480 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/csi-node-driver-w2wp6","calico-system/calico-node-68559","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521516 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521527 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521533 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521541 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521552 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68559"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521563 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8b9bd54c9-lhz4q"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521573 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521582 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521590 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:18.521629 kubelet[2763]: E0515 16:02:18.521599 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:18.521629 kubelet[2763]: I0515 16:02:18.521609 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 16:02:18.620869 kubelet[2763]: I0515 16:02:18.620814 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ndnh2" nodeCondition=["DiskPressure"]
May 15 16:02:18.766154 kubelet[2763]: I0515 16:02:18.766026 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-scj27" nodeCondition=["DiskPressure"]
May 15 16:02:18.918789 kubelet[2763]: I0515 16:02:18.918656 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qvrc9" nodeCondition=["DiskPressure"]
May 15 16:02:19.072444 kubelet[2763]: I0515 16:02:19.072295 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-sr45z" nodeCondition=["DiskPressure"]
May 15 16:02:19.177675 kubelet[2763]: I0515 16:02:19.177601 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4sfmd" nodeCondition=["DiskPressure"]
May 15 16:02:19.318714 kubelet[2763]: I0515 16:02:19.318166 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hz2ds" nodeCondition=["DiskPressure"]
May 15 16:02:19.476188 kubelet[2763]: I0515 16:02:19.476120 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-smfvw" nodeCondition=["DiskPressure"]
May 15 16:02:19.571305 kubelet[2763]: I0515 16:02:19.571253 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-x62h4" nodeCondition=["DiskPressure"]
May 15 16:02:19.722019 kubelet[2763]: I0515 16:02:19.721029 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-sfx8l" nodeCondition=["DiskPressure"]
May 15 16:02:19.872231 kubelet[2763]: I0515 16:02:19.872104 2763 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rcl48" nodeCondition=["DiskPressure"]
May 15 16:02:20.239815 systemd[1]: Started sshd@7-146.190.42.225:22-139.178.68.195:52854.service - OpenSSH per-connection server daemon (139.178.68.195:52854).
May 15 16:02:20.336030 sshd[3811]: Accepted publickey for core from 139.178.68.195 port 52854 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:20.338131 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:20.344925 systemd-logind[1490]: New session 8 of user core.
May 15 16:02:20.349273 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 16:02:20.543850 sshd[3814]: Connection closed by 139.178.68.195 port 52854
May 15 16:02:20.544531 sshd-session[3811]: pam_unix(sshd:session): session closed for user core
May 15 16:02:20.549918 systemd[1]: sshd@7-146.190.42.225:22-139.178.68.195:52854.service: Deactivated successfully.
May 15 16:02:20.552096 systemd[1]: session-8.scope: Deactivated successfully.
May 15 16:02:20.553024 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
May 15 16:02:20.555237 systemd-logind[1490]: Removed session 8.
May 15 16:02:21.858362 kubelet[2763]: E0515 16:02:21.858281 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:21.860696 containerd[1533]: time="2025-05-15T16:02:21.860353966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}"
May 15 16:02:21.861829 containerd[1533]: time="2025-05-15T16:02:21.861215190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:02:21.970009 containerd[1533]: time="2025-05-15T16:02:21.969935472Z" level=error msg="Failed to destroy network for sandbox \"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.975982 containerd[1533]: time="2025-05-15T16:02:21.974142706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.977629 kubelet[2763]: E0515 16:02:21.976219 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.977629 kubelet[2763]: E0515 16:02:21.976311 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:21.977629 kubelet[2763]: E0515 16:02:21.976338 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:21.977629 kubelet[2763]: E0515 16:02:21.976384 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3df20725a99e5bbacbd9f92c52a885f0d57b70c4ceb08aaa11d89f4980fa19a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966"
May 15 16:02:21.978741 systemd[1]: run-netns-cni\x2deeb84f0c\x2dc988\x2de343\x2d7090\x2d367195b95726.mount: Deactivated successfully.
May 15 16:02:21.994037 containerd[1533]: time="2025-05-15T16:02:21.991582520Z" level=error msg="Failed to destroy network for sandbox \"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.995492 containerd[1533]: time="2025-05-15T16:02:21.995425178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.996143 kubelet[2763]: E0515 16:02:21.995978 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:21.996245 kubelet[2763]: E0515 16:02:21.996187 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:21.996245 kubelet[2763]: E0515 16:02:21.996211 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:21.996335 kubelet[2763]: E0515 16:02:21.996276 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b09bb99282c20417130eb7c38b07a55c9a6aa6da9c3c82f03ccb65bd4b1100aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:21.997511 systemd[1]: run-netns-cni\x2d42fcbc7d\x2dc4bf\x2de649\x2d15ce\x2de4feb4a9c4bd.mount: Deactivated successfully.
May 15 16:02:22.860513 containerd[1533]: time="2025-05-15T16:02:22.860219038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,}"
May 15 16:02:22.946013 containerd[1533]: time="2025-05-15T16:02:22.944350288Z" level=error msg="Failed to destroy network for sandbox \"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:22.948281 containerd[1533]: time="2025-05-15T16:02:22.947294031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:22.947686 systemd[1]: run-netns-cni\x2dfbb056b1\x2d3f86\x2d90fa\x2dad68\x2d90274645d0da.mount: Deactivated successfully.
May 15 16:02:22.949741 kubelet[2763]: E0515 16:02:22.947586 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:22.949741 kubelet[2763]: E0515 16:02:22.947645 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:22.949741 kubelet[2763]: E0515 16:02:22.947669 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:22.949741 kubelet[2763]: E0515 16:02:22.947716 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27d0564d6a16e6a85243b3c0558b7cc5098e15d181d358df7dd93efa5c85caec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42"
May 15 16:02:23.858643 kubelet[2763]: E0515 16:02:23.858571 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:23.859840 containerd[1533]: time="2025-05-15T16:02:23.859635213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}"
May 15 16:02:23.934872 containerd[1533]: time="2025-05-15T16:02:23.934821498Z" level=error msg="Failed to destroy network for sandbox \"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:23.938100 containerd[1533]: time="2025-05-15T16:02:23.937951532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:23.939191 systemd[1]: run-netns-cni\x2d3f4ddc66\x2dbd8f\x2d2d13\x2d5ca8\x2dd2ea3409855e.mount: Deactivated successfully.
May 15 16:02:23.939544 kubelet[2763]: E0515 16:02:23.939236 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:23.939544 kubelet[2763]: E0515 16:02:23.939299 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:23.939544 kubelet[2763]: E0515 16:02:23.939321 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:23.939544 kubelet[2763]: E0515 16:02:23.939455 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ddf7dbf3a930d5cfbc6f6da525f1a2b0fc9772911a0054f1f0e2b6f28f5205a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e"
May 15 16:02:25.564076 systemd[1]: Started sshd@8-146.190.42.225:22-139.178.68.195:56980.service - OpenSSH per-connection server daemon (139.178.68.195:56980).
May 15 16:02:25.623547 sshd[3943]: Accepted publickey for core from 139.178.68.195 port 56980 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:25.625505 sshd-session[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:25.631543 systemd-logind[1490]: New session 9 of user core.
May 15 16:02:25.639301 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 16:02:25.793809 sshd[3945]: Connection closed by 139.178.68.195 port 56980
May 15 16:02:25.793184 sshd-session[3943]: pam_unix(sshd:session): session closed for user core
May 15 16:02:25.797071 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
May 15 16:02:25.797401 systemd[1]: sshd@8-146.190.42.225:22-139.178.68.195:56980.service: Deactivated successfully.
May 15 16:02:25.799904 systemd[1]: session-9.scope: Deactivated successfully.
May 15 16:02:25.803388 systemd-logind[1490]: Removed session 9.
May 15 16:02:26.858872 kubelet[2763]: E0515 16:02:26.858614 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:26.860811 containerd[1533]: time="2025-05-15T16:02:26.860734463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 15 16:02:28.543143 kubelet[2763]: I0515 16:02:28.542769 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:28.543143 kubelet[2763]: I0515 16:02:28.542817 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:28.547283 kubelet[2763]: I0515 16:02:28.547220 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:28.561548 kubelet[2763]: I0515 16:02:28.561507 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:28.561945 kubelet[2763]: I0515 16:02:28.561824 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-h2t96","calico-system/calico-node-68559","calico-system/csi-node-driver-w2wp6","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:28.561945 kubelet[2763]: E0515 16:02:28.561883 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:28.561945 kubelet[2763]: E0515 16:02:28.561910 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:28.561945 kubelet[2763]: E0515 16:02:28.561919 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:28.561945 kubelet[2763]: E0515 16:02:28.561925 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68559"
May 15 16:02:28.561945 kubelet[2763]: E0515 16:02:28.561933 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:28.562296 kubelet[2763]: E0515 16:02:28.562222 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8b9bd54c9-lhz4q"
May 15 16:02:28.562296 kubelet[2763]: E0515 16:02:28.562238 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:28.562296 kubelet[2763]: E0515 16:02:28.562248 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z"
May 15 16:02:28.562296 kubelet[2763]: E0515 16:02:28.562257 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:28.562296 kubelet[2763]: E0515 16:02:28.562265 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:28.562296 kubelet[2763]: I0515 16:02:28.562285 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 16:02:30.811961 systemd[1]: Started sshd@9-146.190.42.225:22-139.178.68.195:56994.service - OpenSSH per-connection server daemon (139.178.68.195:56994).
May 15 16:02:30.959419 sshd[3962]: Accepted publickey for core from 139.178.68.195 port 56994 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:30.961891 sshd-session[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:30.974169 systemd-logind[1490]: New session 10 of user core.
May 15 16:02:30.980304 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 16:02:31.005887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196037884.mount: Deactivated successfully.
May 15 16:02:31.006881 containerd[1533]: time="2025-05-15T16:02:31.006115759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3196037884: mkdir /var/lib/containerd/tmpmounts/containerd-mount3196037884/usr/lib/.build-id/58: no space left on device"
May 15 16:02:31.006881 containerd[1533]: time="2025-05-15T16:02:31.006170688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 15 16:02:31.007267 kubelet[2763]: E0515 16:02:31.006452 2763 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3196037884: mkdir /var/lib/containerd/tmpmounts/containerd-mount3196037884/usr/lib/.build-id/58: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:02:31.007267 kubelet[2763]: E0515 16:02:31.006522 2763 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3196037884: mkdir /var/lib/containerd/tmpmounts/containerd-mount3196037884/usr/lib/.build-id/58: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:02:31.009922 kubelet[2763]: E0515 16:02:31.009874 2763 kuberuntime_manager.go:1256] container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbjkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-68559_calico-system(e007eeab-9069-48bd-be2f-87c5ad02bcf8): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3196037884: mkdir /var/lib/containerd/tmpmounts/containerd-mount3196037884/usr/lib/.build-id/58: no space left on device
May 15 16:02:31.010917 kubelet[2763]: E0515 16:02:31.009949 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3196037884: mkdir /var/lib/containerd/tmpmounts/containerd-mount3196037884/usr/lib/.build-id/58: no space left on device\"" pod="calico-system/calico-node-68559" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8"
May 15 16:02:31.177949 sshd[3964]: Connection closed by 139.178.68.195 port 56994
May 15 16:02:31.178522 sshd-session[3962]: pam_unix(sshd:session): session closed for user core
May 15 16:02:31.184468 systemd[1]: sshd@9-146.190.42.225:22-139.178.68.195:56994.service: Deactivated successfully.
May 15 16:02:31.187504 systemd[1]: session-10.scope: Deactivated successfully.
May 15 16:02:31.188650 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit.
May 15 16:02:31.190517 systemd-logind[1490]: Removed session 10.
May 15 16:02:34.859886 kubelet[2763]: E0515 16:02:34.858541 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:34.860808 containerd[1533]: time="2025-05-15T16:02:34.860766144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,}"
May 15 16:02:34.862106 containerd[1533]: time="2025-05-15T16:02:34.860781416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}"
May 15 16:02:34.950029 containerd[1533]: time="2025-05-15T16:02:34.949614633Z" level=error msg="Failed to destroy network for sandbox \"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.953016 systemd[1]: run-netns-cni\x2d5ade50a5\x2d8e24\x2dd3ba\x2d5db9\x2dc97f6189572b.mount: Deactivated successfully.
May 15 16:02:34.954121 containerd[1533]: time="2025-05-15T16:02:34.954077798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.955098 kubelet[2763]: E0515 16:02:34.954753 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.955098 kubelet[2763]: E0515 16:02:34.955064 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:34.955324 kubelet[2763]: E0515 16:02:34.955260 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:34.955562 kubelet[2763]: E0515 16:02:34.955510 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f742e45b797a87d719614ffd03059b6a381adf16ac90bd8e5bd9e79c73521a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42"
May 15 16:02:34.960003 containerd[1533]: time="2025-05-15T16:02:34.957781580Z" level=error msg="Failed to destroy network for sandbox \"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.960898 containerd[1533]: time="2025-05-15T16:02:34.960854895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.961157 systemd[1]: run-netns-cni\x2dfec3803f\x2dda6d\x2dc0b5\x2d9faa\x2d63eef669defc.mount: Deactivated successfully.
May 15 16:02:34.962981 kubelet[2763]: E0515 16:02:34.962938 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:34.963203 kubelet[2763]: E0515 16:02:34.963183 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:34.963258 kubelet[2763]: E0515 16:02:34.963213 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:34.963293 kubelet[2763]: E0515 16:02:34.963263 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccb84d439eac87fadf86240662cbe7d9c1d370cf8eb98a3038f228a2089c7917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966"
May 15 16:02:35.859106 containerd[1533]: time="2025-05-15T16:02:35.859055966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:02:35.922800 containerd[1533]: time="2025-05-15T16:02:35.922744178Z" level=error msg="Failed to destroy network for sandbox \"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:35.925654 systemd[1]: run-netns-cni\x2dffa688c5\x2d7074\x2db888\x2dd526\x2d7f148e85da2f.mount: Deactivated successfully.
May 15 16:02:35.926646 kubelet[2763]: E0515 16:02:35.925973 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:35.926646 kubelet[2763]: E0515 16:02:35.926066 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:35.926646 kubelet[2763]: E0515 16:02:35.926092 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:35.926646 kubelet[2763]: E0515 16:02:35.926144 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:35.928128 containerd[1533]: time="2025-05-15T16:02:35.925695538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d262d802a3695619739a0bc384127c00f282c6ccc97d4bf68ee1ee6f4a44716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:36.196881 systemd[1]: Started sshd@10-146.190.42.225:22-139.178.68.195:38946.service - OpenSSH per-connection server daemon (139.178.68.195:38946).
May 15 16:02:36.256411 sshd[4070]: Accepted publickey for core from 139.178.68.195 port 38946 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:36.258237 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:36.265259 systemd-logind[1490]: New session 11 of user core.
May 15 16:02:36.270203 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 16:02:36.401266 sshd[4072]: Connection closed by 139.178.68.195 port 38946
May 15 16:02:36.401710 sshd-session[4070]: pam_unix(sshd:session): session closed for user core
May 15 16:02:36.415033 systemd[1]: sshd@10-146.190.42.225:22-139.178.68.195:38946.service: Deactivated successfully.
May 15 16:02:36.417796 systemd[1]: session-11.scope: Deactivated successfully.
May 15 16:02:36.418954 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit.
May 15 16:02:36.423264 systemd[1]: Started sshd@11-146.190.42.225:22-139.178.68.195:38956.service - OpenSSH per-connection server daemon (139.178.68.195:38956).
May 15 16:02:36.424281 systemd-logind[1490]: Removed session 11.
May 15 16:02:36.483815 sshd[4085]: Accepted publickey for core from 139.178.68.195 port 38956 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:36.485688 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:36.490795 systemd-logind[1490]: New session 12 of user core.
May 15 16:02:36.501209 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 16:02:36.665115 sshd[4087]: Connection closed by 139.178.68.195 port 38956
May 15 16:02:36.666063 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
May 15 16:02:36.680696 systemd[1]: sshd@11-146.190.42.225:22-139.178.68.195:38956.service: Deactivated successfully.
May 15 16:02:36.687154 systemd[1]: session-12.scope: Deactivated successfully.
May 15 16:02:36.689324 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit.
May 15 16:02:36.697017 systemd-logind[1490]: Removed session 12.
May 15 16:02:36.701851 systemd[1]: Started sshd@12-146.190.42.225:22-139.178.68.195:38972.service - OpenSSH per-connection server daemon (139.178.68.195:38972).
May 15 16:02:36.762086 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 38972 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:36.764068 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:36.770802 systemd-logind[1490]: New session 13 of user core.
May 15 16:02:36.776220 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 16:02:36.912554 sshd[4098]: Connection closed by 139.178.68.195 port 38972
May 15 16:02:36.911899 sshd-session[4096]: pam_unix(sshd:session): session closed for user core
May 15 16:02:36.916641 systemd[1]: sshd@12-146.190.42.225:22-139.178.68.195:38972.service: Deactivated successfully.
May 15 16:02:36.919534 systemd[1]: session-13.scope: Deactivated successfully.
May 15 16:02:36.920869 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit.
May 15 16:02:36.923679 systemd-logind[1490]: Removed session 13.
May 15 16:02:37.859668 kubelet[2763]: E0515 16:02:37.859618 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:37.860957 containerd[1533]: time="2025-05-15T16:02:37.860658040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}"
May 15 16:02:37.932018 containerd[1533]: time="2025-05-15T16:02:37.931934971Z" level=error msg="Failed to destroy network for sandbox \"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:37.934576 containerd[1533]: time="2025-05-15T16:02:37.934514037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:37.936484 kubelet[2763]: E0515 16:02:37.934879 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:37.936484 kubelet[2763]: E0515 16:02:37.934945 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:37.936484 kubelet[2763]: E0515 16:02:37.934968 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:37.936484 kubelet[2763]: E0515 16:02:37.935028 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"138c0ee166867b46f0ab4b5808222c9cd911a104b27bf32ce13ceeaa4766ece8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e"
May 15 16:02:37.936111 systemd[1]: run-netns-cni\x2d65002a22\x2d5d8a\x2d4563\x2d1dc5\x2d6c59c4a26150.mount: Deactivated successfully.
May 15 16:02:38.580227 kubelet[2763]: I0515 16:02:38.580165 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:02:38.580227 kubelet[2763]: I0515 16:02:38.580211 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:02:38.582840 kubelet[2763]: I0515 16:02:38.582744 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:02:38.595797 kubelet[2763]: I0515 16:02:38.595567 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:02:38.595797 kubelet[2763]: I0515 16:02:38.595648 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-kube-controllers-5858fd5ccf-lw59z","kube-system/coredns-7db6d8ff4d-h2t96","calico-system/csi-node-driver-w2wp6","calico-system/calico-node-68559","calico-system/calico-typha-8b9bd54c9-lhz4q","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595686 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595696 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595703 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595710 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595716 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-68559"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595727 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8b9bd54c9-lhz4q"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595738 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595747 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595755 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:38.595797 kubelet[2763]: E0515 16:02:38.595766 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:02:38.595797 kubelet[2763]: I0515 16:02:38.595776 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 16:02:41.926653 systemd[1]: Started sshd@13-146.190.42.225:22-139.178.68.195:38978.service - OpenSSH per-connection server daemon (139.178.68.195:38978).
May 15 16:02:41.992110 sshd[4139]: Accepted publickey for core from 139.178.68.195 port 38978 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:41.993731 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:41.998948 systemd-logind[1490]: New session 14 of user core.
May 15 16:02:42.007263 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 16:02:42.146608 sshd[4141]: Connection closed by 139.178.68.195 port 38978
May 15 16:02:42.147263 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
May 15 16:02:42.153591 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
May 15 16:02:42.154272 systemd[1]: sshd@13-146.190.42.225:22-139.178.68.195:38978.service: Deactivated successfully.
May 15 16:02:42.157154 systemd[1]: session-14.scope: Deactivated successfully.
May 15 16:02:42.159869 systemd-logind[1490]: Removed session 14.
May 15 16:02:45.859452 kubelet[2763]: E0515 16:02:45.859013 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:45.860431 containerd[1533]: time="2025-05-15T16:02:45.860319547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,}"
May 15 16:02:45.861370 kubelet[2763]: E0515 16:02:45.861344 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-68559" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8"
May 15 16:02:45.941351 containerd[1533]: time="2025-05-15T16:02:45.941303781Z" level=error msg="Failed to destroy network for sandbox \"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:45.944622 containerd[1533]: time="2025-05-15T16:02:45.944474695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5858fd5ccf-lw59z,Uid:c4c65cd6-c8cd-4005-9b33-295db8fc6f42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:45.944860 kubelet[2763]: E0515 16:02:45.944788 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:45.944860 kubelet[2763]: E0515 16:02:45.944850 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:45.944942 kubelet[2763]: E0515 16:02:45.944870 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z"
May 15 16:02:45.944942 kubelet[2763]: E0515 16:02:45.944915 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5858fd5ccf-lw59z_calico-system(c4c65cd6-c8cd-4005-9b33-295db8fc6f42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"346285e2fb2be6cf4f234e7c9c42d4648ef7eebbf2f8d6f619bcb944bedc8a73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5858fd5ccf-lw59z" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42"
May 15 16:02:45.945430 systemd[1]: run-netns-cni\x2d620320ec\x2d4f86\x2de908\x2d8d1d\x2dcfa2cc3268ab.mount: Deactivated successfully.
May 15 16:02:46.859289 containerd[1533]: time="2025-05-15T16:02:46.858513227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:02:46.939317 containerd[1533]: time="2025-05-15T16:02:46.939217441Z" level=error msg="Failed to destroy network for sandbox \"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:46.942706 systemd[1]: run-netns-cni\x2df63e1c5c\x2dc044\x2da834\x2dc851\x2d86d7a2f1cc39.mount: Deactivated successfully.
May 15 16:02:46.943050 containerd[1533]: time="2025-05-15T16:02:46.943008036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:46.943500 kubelet[2763]: E0515 16:02:46.943461 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:46.945185 kubelet[2763]: E0515 16:02:46.944508 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:46.945185 kubelet[2763]: E0515 16:02:46.944640 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:02:46.945533 kubelet[2763]: E0515 16:02:46.945316 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5373ea82b35e995a8436374166b39d0ff7389cfdf246c2b93af45e999dea196\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:02:47.011017 containerd[1533]: time="2025-05-15T16:02:47.010952335Z" level=info msg="StopContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" with timeout 300 (s)"
May 15 16:02:47.012506 containerd[1533]: time="2025-05-15T16:02:47.011844747Z" level=info msg="Stop container \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" with signal terminated"
May 15 16:02:47.099172 containerd[1533]: time="2025-05-15T16:02:47.098811475Z" level=info msg="StopPodSandbox for \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\""
May 15 16:02:47.099907 containerd[1533]: time="2025-05-15T16:02:47.099870024Z" level=info msg="Container to stop \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 16:02:47.100080 containerd[1533]: time="2025-05-15T16:02:47.100060622Z" level=info msg="Container to stop \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 16:02:47.114246 systemd[1]: cri-containerd-e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924.scope: Deactivated successfully.
May 15 16:02:47.118717 containerd[1533]: time="2025-05-15T16:02:47.118485561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" id:\"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" pid:3279 exit_status:137 exited_at:{seconds:1747324967 nanos:117650285}"
May 15 16:02:47.168312 systemd[1]: Started sshd@14-146.190.42.225:22-139.178.68.195:54348.service - OpenSSH per-connection server daemon (139.178.68.195:54348).
May 15 16:02:47.217175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924-rootfs.mount: Deactivated successfully.
May 15 16:02:47.229566 containerd[1533]: time="2025-05-15T16:02:47.229341889Z" level=info msg="shim disconnected" id=e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924 namespace=k8s.io
May 15 16:02:47.229566 containerd[1533]: time="2025-05-15T16:02:47.229375507Z" level=warning msg="cleaning up after shim disconnected" id=e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924 namespace=k8s.io
May 15 16:02:47.229566 containerd[1533]: time="2025-05-15T16:02:47.229382955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 16:02:47.284389 containerd[1533]: time="2025-05-15T16:02:47.283545673Z" level=info msg="received exit event sandbox_id:\"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" exit_status:137 exited_at:{seconds:1747324967 nanos:117650285}"
May 15 16:02:47.290242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924-shm.mount: Deactivated successfully.
May 15 16:02:47.291425 containerd[1533]: time="2025-05-15T16:02:47.291393161Z" level=info msg="TearDown network for sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" successfully"
May 15 16:02:47.291519 containerd[1533]: time="2025-05-15T16:02:47.291506919Z" level=info msg="StopPodSandbox for \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" returns successfully"
May 15 16:02:47.295705 sshd[4234]: Accepted publickey for core from 139.178.68.195 port 54348 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:47.299457 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:47.312243 systemd-logind[1490]: New session 15 of user core.
May 15 16:02:47.318300 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430722 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-net-dir\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430795 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-lib-modules\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430821 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-policysync\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430845 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-run-calico\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430872 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-xtables-lock\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430905 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007eeab-9069-48bd-be2f-87c5ad02bcf8-tigera-ca-bundle\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430931 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-tigera-ca-bundle\") pod \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\" (UID: \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\") "
May 15 16:02:47.432740 kubelet[2763]: I0515 16:02:47.430966 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e007eeab-9069-48bd-be2f-87c5ad02bcf8-node-certs\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.433474 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.433561 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.433587 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.433609 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-policysync" (OuterVolumeSpecName: "policysync") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.434131 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-log-dir\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.434797 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-flexvol-driver-host\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.434861 kubelet[2763]: I0515 16:02:47.434852 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-lib-calico\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.435364 kubelet[2763]: I0515 16:02:47.434903 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbjkl\" (UniqueName: \"kubernetes.io/projected/e007eeab-9069-48bd-be2f-87c5ad02bcf8-kube-api-access-gbjkl\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.435364 kubelet[2763]: I0515 16:02:47.434931 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnvf\" (UniqueName: \"kubernetes.io/projected/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-kube-api-access-cjnvf\") pod \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\" (UID: \"c4c65cd6-c8cd-4005-9b33-295db8fc6f42\") "
May 15 16:02:47.435364 kubelet[2763]: I0515 16:02:47.435047 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-bin-dir\") pod \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\" (UID: \"e007eeab-9069-48bd-be2f-87c5ad02bcf8\") "
May 15 16:02:47.437627 kubelet[2763]: I0515 16:02:47.434205 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.437627 kubelet[2763]: I0515 16:02:47.437309 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.438710 kubelet[2763]: I0515 16:02:47.437861 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.438710 kubelet[2763]: I0515 16:02:47.437901 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.439071 kubelet[2763]: I0515 16:02:47.438793 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 16:02:47.439071 kubelet[2763]: I0515 16:02:47.438863 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e007eeab-9069-48bd-be2f-87c5ad02bcf8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 16:02:47.441571 kubelet[2763]: I0515 16:02:47.441352 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c4c65cd6-c8cd-4005-9b33-295db8fc6f42" (UID: "c4c65cd6-c8cd-4005-9b33-295db8fc6f42"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 16:02:47.450875 kubelet[2763]: I0515 16:02:47.450289 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e007eeab-9069-48bd-be2f-87c5ad02bcf8-kube-api-access-gbjkl" (OuterVolumeSpecName: "kube-api-access-gbjkl") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "kube-api-access-gbjkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:47.452061 systemd[1]: var-lib-kubelet-pods-e007eeab\x2d9069\x2d48bd\x2dbe2f\x2d87c5ad02bcf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbjkl.mount: Deactivated successfully.
May 15 16:02:47.455442 kubelet[2763]: I0515 16:02:47.455379 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e007eeab-9069-48bd-be2f-87c5ad02bcf8-node-certs" (OuterVolumeSpecName: "node-certs") pod "e007eeab-9069-48bd-be2f-87c5ad02bcf8" (UID: "e007eeab-9069-48bd-be2f-87c5ad02bcf8"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 16:02:47.458129 kubelet[2763]: I0515 16:02:47.457407 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-kube-api-access-cjnvf" (OuterVolumeSpecName: "kube-api-access-cjnvf") pod "c4c65cd6-c8cd-4005-9b33-295db8fc6f42" (UID: "c4c65cd6-c8cd-4005-9b33-295db8fc6f42"). InnerVolumeSpecName "kube-api-access-cjnvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:47.538016 kubelet[2763]: I0515 16:02:47.537914 2763 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-bin-dir\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538016 kubelet[2763]: I0515 16:02:47.537951 2763 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-net-dir\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538016 kubelet[2763]: I0515 16:02:47.537964 2763 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-lib-modules\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538735 kubelet[2763]: I0515 16:02:47.537979 2763 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-policysync\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538735 kubelet[2763]: I0515 16:02:47.538111 2763 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-run-calico\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538735 kubelet[2763]: I0515 16:02:47.538126 2763 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-xtables-lock\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.538735 kubelet[2763]: I0515 16:02:47.538138 2763 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e007eeab-9069-48bd-be2f-87c5ad02bcf8-tigera-ca-bundle\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.538945 2763 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-tigera-ca-bundle\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539105 2763 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e007eeab-9069-48bd-be2f-87c5ad02bcf8-node-certs\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539123 2763 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-cni-log-dir\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539138 2763 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-flexvol-driver-host\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539154 2763 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e007eeab-9069-48bd-be2f-87c5ad02bcf8-var-lib-calico\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539185 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gbjkl\" (UniqueName: \"kubernetes.io/projected/e007eeab-9069-48bd-be2f-87c5ad02bcf8-kube-api-access-gbjkl\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.539260 kubelet[2763]: I0515 16:02:47.539202 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cjnvf\" (UniqueName: \"kubernetes.io/projected/c4c65cd6-c8cd-4005-9b33-295db8fc6f42-kube-api-access-cjnvf\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\""
May 15 16:02:47.543908 sshd[4264]: Connection closed by 139.178.68.195 port 54348
May 15 16:02:47.545696 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
May 15 16:02:47.553064 systemd[1]: sshd@14-146.190.42.225:22-139.178.68.195:54348.service: Deactivated successfully.
May 15 16:02:47.558708 systemd[1]: session-15.scope: Deactivated successfully.
May 15 16:02:47.561222 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
May 15 16:02:47.567442 systemd-logind[1490]: Removed session 15.
May 15 16:02:47.870266 systemd[1]: var-lib-kubelet-pods-c4c65cd6\x2dc8cd\x2d4005\x2d9b33\x2d295db8fc6f42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjnvf.mount: Deactivated successfully.
May 15 16:02:47.870421 systemd[1]: var-lib-kubelet-pods-e007eeab\x2d9069\x2d48bd\x2dbe2f\x2d87c5ad02bcf8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
May 15 16:02:48.021853 systemd[1]: cri-containerd-cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8.scope: Deactivated successfully.
May 15 16:02:48.022526 systemd[1]: cri-containerd-cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8.scope: Consumed 358ms CPU time, 34.5M memory peak, 12.9M read from disk.
May 15 16:02:48.027700 containerd[1533]: time="2025-05-15T16:02:48.027452837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" id:\"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" pid:3315 exit_status:1 exited_at:{seconds:1747324968 nanos:26935739}"
May 15 16:02:48.028514 containerd[1533]: time="2025-05-15T16:02:48.028178806Z" level=info msg="received exit event container_id:\"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" id:\"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" pid:3315 exit_status:1 exited_at:{seconds:1747324968 nanos:26935739}"
May 15 16:02:48.060513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8-rootfs.mount: Deactivated successfully.
May 15 16:02:48.066425 containerd[1533]: time="2025-05-15T16:02:48.066369819Z" level=info msg="StopContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" returns successfully"
May 15 16:02:48.067232 containerd[1533]: time="2025-05-15T16:02:48.067206524Z" level=info msg="StopPodSandbox for \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\""
May 15 16:02:48.067499 containerd[1533]: time="2025-05-15T16:02:48.067474968Z" level=info msg="Container to stop \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 16:02:48.077482 systemd[1]: cri-containerd-ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97.scope: Deactivated successfully.
May 15 16:02:48.080151 containerd[1533]: time="2025-05-15T16:02:48.079862333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" id:\"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" pid:3212 exit_status:137 exited_at:{seconds:1747324968 nanos:78523294}"
May 15 16:02:48.124920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97-rootfs.mount: Deactivated successfully.
May 15 16:02:48.129351 containerd[1533]: time="2025-05-15T16:02:48.128484441Z" level=info msg="received exit event sandbox_id:\"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" exit_status:137 exited_at:{seconds:1747324968 nanos:78523294}"
May 15 16:02:48.131475 containerd[1533]: time="2025-05-15T16:02:48.131435439Z" level=info msg="TearDown network for sandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" successfully"
May 15 16:02:48.132195 containerd[1533]: time="2025-05-15T16:02:48.132170893Z" level=info msg="StopPodSandbox for \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" returns successfully"
May 15 16:02:48.133073 containerd[1533]: time="2025-05-15T16:02:48.132184125Z" level=info msg="shim disconnected" id=ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97 namespace=k8s.io
May 15 16:02:48.133073 containerd[1533]: time="2025-05-15T16:02:48.133073954Z" level=warning msg="cleaning up after shim disconnected" id=ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97 namespace=k8s.io
May 15 16:02:48.133209 containerd[1533]: time="2025-05-15T16:02:48.133082323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 16:02:48.133285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97-shm.mount: Deactivated successfully.
May 15 16:02:48.192293 kubelet[2763]: I0515 16:02:48.192186 2763 scope.go:117] "RemoveContainer" containerID="f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0"
May 15 16:02:48.203825 containerd[1533]: time="2025-05-15T16:02:48.203463239Z" level=info msg="RemoveContainer for \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\""
May 15 16:02:48.213586 systemd[1]: Removed slice kubepods-besteffort-pode007eeab_9069_48bd_be2f_87c5ad02bcf8.slice - libcontainer container kubepods-besteffort-pode007eeab_9069_48bd_be2f_87c5ad02bcf8.slice.
May 15 16:02:48.213774 systemd[1]: kubepods-besteffort-pode007eeab_9069_48bd_be2f_87c5ad02bcf8.slice: Consumed 645ms CPU time, 146.4M memory peak, 1.5M read from disk, 160.4M written to disk.
May 15 16:02:48.219771 systemd[1]: Removed slice kubepods-besteffort-podc4c65cd6_c8cd_4005_9b33_295db8fc6f42.slice - libcontainer container kubepods-besteffort-podc4c65cd6_c8cd_4005_9b33_295db8fc6f42.slice.
May 15 16:02:48.223083 containerd[1533]: time="2025-05-15T16:02:48.222606465Z" level=info msg="RemoveContainer for \"f1e2d7cda0dd14d6f438e3487081c359be03dcf5bcc7dff63d058438fb6762d0\" returns successfully"
May 15 16:02:48.224111 kubelet[2763]: I0515 16:02:48.223882 2763 scope.go:117] "RemoveContainer" containerID="03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a"
May 15 16:02:48.230655 containerd[1533]: time="2025-05-15T16:02:48.230357521Z" level=info msg="RemoveContainer for \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\""
May 15 16:02:48.238493 containerd[1533]: time="2025-05-15T16:02:48.238422665Z" level=info msg="RemoveContainer for \"03c9ffdaf35563522f605a81d36f22bc97052f4ff68bfc31352f17babe077c2a\" returns successfully"
May 15 16:02:48.239096 kubelet[2763]: I0515 16:02:48.238787 2763 scope.go:117] "RemoveContainer" containerID="cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8"
May 15 16:02:48.241324 containerd[1533]: time="2025-05-15T16:02:48.241255791Z" level=info msg="RemoveContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\""
May 15 16:02:48.244225 containerd[1533]: time="2025-05-15T16:02:48.244150774Z" level=info msg="RemoveContainer for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" returns successfully"
May 15 16:02:48.245112 kubelet[2763]: I0515 16:02:48.244543 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/767d34ab-3299-46dd-add9-09d52538ad17-typha-certs\") pod \"767d34ab-3299-46dd-add9-09d52538ad17\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") "
May 15 16:02:48.245112 kubelet[2763]: I0515 16:02:48.244575 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7qdd\" (UniqueName: \"kubernetes.io/projected/767d34ab-3299-46dd-add9-09d52538ad17-kube-api-access-w7qdd\") pod \"767d34ab-3299-46dd-add9-09d52538ad17\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") "
May 15 16:02:48.245112 kubelet[2763]: I0515 16:02:48.244595 2763 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767d34ab-3299-46dd-add9-09d52538ad17-tigera-ca-bundle\") pod \"767d34ab-3299-46dd-add9-09d52538ad17\" (UID: \"767d34ab-3299-46dd-add9-09d52538ad17\") "
May 15 16:02:48.245112 kubelet[2763]: I0515 16:02:48.245035 2763 scope.go:117] "RemoveContainer" containerID="cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8"
May 15 16:02:48.245329 containerd[1533]: time="2025-05-15T16:02:48.245298402Z" level=error msg="ContainerStatus for \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\": not found"
May 15 16:02:48.245512 kubelet[2763]: E0515 16:02:48.245487 2763 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\": not found" containerID="cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8"
May 15 16:02:48.245570 kubelet[2763]: I0515 16:02:48.245520 2763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8"} err="failed to get container status \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc627ff2db7208287b61e77f24d5b6e049cee7138456d6702aa29d8152d6cbc8\": not found"
May 15 16:02:48.256337 systemd[1]: var-lib-kubelet-pods-767d34ab\x2d3299\x2d46dd\x2dadd9\x2d09d52538ad17-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
May 15 16:02:48.267037 kubelet[2763]: I0515 16:02:48.266456 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767d34ab-3299-46dd-add9-09d52538ad17-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "767d34ab-3299-46dd-add9-09d52538ad17" (UID: "767d34ab-3299-46dd-add9-09d52538ad17"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 16:02:48.270564 kubelet[2763]: I0515 16:02:48.270500 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767d34ab-3299-46dd-add9-09d52538ad17-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "767d34ab-3299-46dd-add9-09d52538ad17" (UID: "767d34ab-3299-46dd-add9-09d52538ad17"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 16:02:48.270722 kubelet[2763]: I0515 16:02:48.270631 2763 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767d34ab-3299-46dd-add9-09d52538ad17-kube-api-access-w7qdd" (OuterVolumeSpecName: "kube-api-access-w7qdd") pod "767d34ab-3299-46dd-add9-09d52538ad17" (UID: "767d34ab-3299-46dd-add9-09d52538ad17"). InnerVolumeSpecName "kube-api-access-w7qdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 16:02:48.287143 kubelet[2763]: I0515 16:02:48.287088 2763 topology_manager.go:215] "Topology Admit Handler" podUID="0e33e6c6-c6c7-474c-b042-d3d51a0e6649" podNamespace="calico-system" podName="calico-node-l99xj"
May 15 16:02:48.287304 kubelet[2763]: E0515 16:02:48.287169 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c5d861-69f5-41ae-bab2-9fe813c77a00" containerName="tigera-operator"
May 15 16:02:48.287304 kubelet[2763]: E0515 16:02:48.287178 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="767d34ab-3299-46dd-add9-09d52538ad17" containerName="calico-typha"
May 15 16:02:48.287304 kubelet[2763]: E0515 16:02:48.287184 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8" containerName="flexvol-driver"
May 15 16:02:48.287304 kubelet[2763]: E0515 16:02:48.287190 2763 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8" containerName="install-cni"
May 15 16:02:48.291114 kubelet[2763]: I0515 16:02:48.290921 2763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8" containerName="install-cni"
May 15 16:02:48.291114 kubelet[2763]: I0515 16:02:48.290951 2763 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c5d861-69f5-41ae-bab2-9fe813c77a00" containerName="tigera-operator"
May 15 16:02:48.291114 kubelet[2763]: I0515 16:02:48.290959 2763
memory_manager.go:354] "RemoveStaleState removing state" podUID="767d34ab-3299-46dd-add9-09d52538ad17" containerName="calico-typha" May 15 16:02:48.301367 systemd[1]: Created slice kubepods-besteffort-pod0e33e6c6_c6c7_474c_b042_d3d51a0e6649.slice - libcontainer container kubepods-besteffort-pod0e33e6c6_c6c7_474c_b042_d3d51a0e6649.slice. May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347253 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-policysync\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347305 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-var-run-calico\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347325 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-cni-log-dir\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347347 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-cni-bin-dir\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347364 2763 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-var-lib-calico\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347386 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-node-certs\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347404 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-cni-net-dir\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347427 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-lib-modules\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347444 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvtg\" (UniqueName: \"kubernetes.io/projected/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-kube-api-access-rcvtg\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347463 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-tigera-ca-bundle\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347483 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-flexvol-driver-host\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347498 2763 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e33e6c6-c6c7-474c-b042-d3d51a0e6649-xtables-lock\") pod \"calico-node-l99xj\" (UID: \"0e33e6c6-c6c7-474c-b042-d3d51a0e6649\") " pod="calico-system/calico-node-l99xj" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347521 2763 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/767d34ab-3299-46dd-add9-09d52538ad17-typha-certs\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\"" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347533 2763 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w7qdd\" (UniqueName: \"kubernetes.io/projected/767d34ab-3299-46dd-add9-09d52538ad17-kube-api-access-w7qdd\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\"" May 15 16:02:48.347624 kubelet[2763]: I0515 16:02:48.347542 2763 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767d34ab-3299-46dd-add9-09d52538ad17-tigera-ca-bundle\") on node \"ci-4334.0.0-a-32b0bb88bb\" DevicePath \"\"" May 15 16:02:48.510208 systemd[1]: Removed slice kubepods-besteffort-pod767d34ab_3299_46dd_add9_09d52538ad17.slice - 
libcontainer container kubepods-besteffort-pod767d34ab_3299_46dd_add9_09d52538ad17.slice. May 15 16:02:48.510673 systemd[1]: kubepods-besteffort-pod767d34ab_3299_46dd_add9_09d52538ad17.slice: Consumed 394ms CPU time, 34.8M memory peak, 12.9M read from disk. May 15 16:02:48.606175 kubelet[2763]: E0515 16:02:48.606008 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:02:48.608038 containerd[1533]: time="2025-05-15T16:02:48.608003948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l99xj,Uid:0e33e6c6-c6c7-474c-b042-d3d51a0e6649,Namespace:calico-system,Attempt:0,}" May 15 16:02:48.610325 kubelet[2763]: I0515 16:02:48.610277 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 16:02:48.610325 kubelet[2763]: I0515 16:02:48.610310 2763 container_gc.go:88] "Attempting to delete unused containers" May 15 16:02:48.614656 containerd[1533]: time="2025-05-15T16:02:48.614127924Z" level=info msg="StopPodSandbox for \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\"" May 15 16:02:48.614656 containerd[1533]: time="2025-05-15T16:02:48.614317506Z" level=info msg="TearDown network for sandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" successfully" May 15 16:02:48.614656 containerd[1533]: time="2025-05-15T16:02:48.614355298Z" level=info msg="StopPodSandbox for \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" returns successfully" May 15 16:02:48.616290 containerd[1533]: time="2025-05-15T16:02:48.616254033Z" level=info msg="RemovePodSandbox for \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\"" May 15 16:02:48.616290 containerd[1533]: time="2025-05-15T16:02:48.616290466Z" level=info msg="Forcibly stopping sandbox 
\"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\"" May 15 16:02:48.616458 containerd[1533]: time="2025-05-15T16:02:48.616388235Z" level=info msg="TearDown network for sandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" successfully" May 15 16:02:48.621432 containerd[1533]: time="2025-05-15T16:02:48.621204829Z" level=info msg="Ensure that sandbox ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97 in task-service has been cleanup successfully" May 15 16:02:48.625173 containerd[1533]: time="2025-05-15T16:02:48.625016741Z" level=info msg="RemovePodSandbox \"ece05a21a1a75a37dfdcbe32b1d3e97d81c2071aec295e61aa0054a71a276b97\" returns successfully" May 15 16:02:48.626268 containerd[1533]: time="2025-05-15T16:02:48.626216690Z" level=info msg="StopPodSandbox for \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\"" May 15 16:02:48.626394 containerd[1533]: time="2025-05-15T16:02:48.626374149Z" level=info msg="TearDown network for sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" successfully" May 15 16:02:48.626394 containerd[1533]: time="2025-05-15T16:02:48.626389376Z" level=info msg="StopPodSandbox for \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" returns successfully" May 15 16:02:48.626808 containerd[1533]: time="2025-05-15T16:02:48.626779900Z" level=info msg="RemovePodSandbox for \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\"" May 15 16:02:48.626808 containerd[1533]: time="2025-05-15T16:02:48.626806027Z" level=info msg="Forcibly stopping sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\"" May 15 16:02:48.627028 containerd[1533]: time="2025-05-15T16:02:48.626874031Z" level=info msg="TearDown network for sandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" successfully" May 15 16:02:48.628312 containerd[1533]: time="2025-05-15T16:02:48.628239293Z" level=info msg="Ensure that 
sandbox e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924 in task-service has been cleanup successfully" May 15 16:02:48.629959 containerd[1533]: time="2025-05-15T16:02:48.629913159Z" level=info msg="RemovePodSandbox \"e9341e2ae2ad47eb022f9d0f98084c0d2130260f2a83bed5d4d072390697e924\" returns successfully" May 15 16:02:48.630788 kubelet[2763]: I0515 16:02:48.630690 2763 image_gc_manager.go:404] "Attempting to delete unused images" May 15 16:02:48.638932 containerd[1533]: time="2025-05-15T16:02:48.638855646Z" level=info msg="connecting to shim bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c" address="unix:///run/containerd/s/9754ede6da9b2d8d8dd7c8940937ea42f6240c64fc6c105520699493f99c2db3" namespace=k8s.io protocol=ttrpc version=3 May 15 16:02:48.644014 kubelet[2763]: I0515 16:02:48.643952 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 16:02:48.644165 kubelet[2763]: I0515 16:02:48.644076 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"] May 15 16:02:48.644165 kubelet[2763]: E0515 16:02:48.644114 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:02:48.644165 kubelet[2763]: E0515 16:02:48.644126 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:02:48.644165 kubelet[2763]: E0515 16:02:48.644134 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj" May 15 16:02:48.644165 
kubelet[2763]: E0515 16:02:48.644141 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6" May 15 16:02:48.644165 kubelet[2763]: E0515 16:02:48.644152 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:48.644165 kubelet[2763]: E0515 16:02:48.644163 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z" May 15 16:02:48.644350 kubelet[2763]: E0515 16:02:48.644172 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:48.644350 kubelet[2763]: E0515 16:02:48.644180 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:48.644350 kubelet[2763]: I0515 16:02:48.644190 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 16:02:48.683282 systemd[1]: Started cri-containerd-bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c.scope - libcontainer container bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c. 
May 15 16:02:48.721963 containerd[1533]: time="2025-05-15T16:02:48.721916354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l99xj,Uid:0e33e6c6-c6c7-474c-b042-d3d51a0e6649,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c\""
May 15 16:02:48.723595 kubelet[2763]: E0515 16:02:48.723190 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:48.726516 containerd[1533]: time="2025-05-15T16:02:48.726456127Z" level=info msg="CreateContainer within sandbox \"bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 15 16:02:48.766666 containerd[1533]: time="2025-05-15T16:02:48.765903669Z" level=info msg="Container 161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e: CDI devices from CRI Config.CDIDevices: []"
May 15 16:02:48.784534 containerd[1533]: time="2025-05-15T16:02:48.783612220Z" level=info msg="CreateContainer within sandbox \"bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\""
May 15 16:02:48.787229 containerd[1533]: time="2025-05-15T16:02:48.787191289Z" level=info msg="StartContainer for \"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\""
May 15 16:02:48.789103 containerd[1533]: time="2025-05-15T16:02:48.789040369Z" level=info msg="connecting to shim 161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e" address="unix:///run/containerd/s/9754ede6da9b2d8d8dd7c8940937ea42f6240c64fc6c105520699493f99c2db3" protocol=ttrpc version=3
May 15 16:02:48.813181 systemd[1]: Started cri-containerd-161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e.scope - libcontainer container 161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e.
May 15 16:02:48.861620 kubelet[2763]: E0515 16:02:48.861478 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:48.863092 containerd[1533]: time="2025-05-15T16:02:48.862714601Z" level=info msg="StartContainer for \"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\" returns successfully"
May 15 16:02:48.864955 containerd[1533]: time="2025-05-15T16:02:48.864767549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}"
May 15 16:02:48.871262 kubelet[2763]: I0515 16:02:48.867972 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767d34ab-3299-46dd-add9-09d52538ad17" path="/var/lib/kubelet/pods/767d34ab-3299-46dd-add9-09d52538ad17/volumes"
May 15 16:02:48.874526 kubelet[2763]: I0515 16:02:48.873854 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c65cd6-c8cd-4005-9b33-295db8fc6f42" path="/var/lib/kubelet/pods/c4c65cd6-c8cd-4005-9b33-295db8fc6f42/volumes"
May 15 16:02:48.882725 kubelet[2763]: I0515 16:02:48.878974 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e007eeab-9069-48bd-be2f-87c5ad02bcf8" path="/var/lib/kubelet/pods/e007eeab-9069-48bd-be2f-87c5ad02bcf8/volumes"
May 15 16:02:48.882474 systemd[1]: var-lib-kubelet-pods-767d34ab\x2d3299\x2d46dd\x2dadd9\x2d09d52538ad17-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
May 15 16:02:48.882578 systemd[1]: var-lib-kubelet-pods-767d34ab\x2d3299\x2d46dd\x2dadd9\x2d09d52538ad17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7qdd.mount: Deactivated successfully.
May 15 16:02:48.948135 systemd[1]: cri-containerd-161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e.scope: Deactivated successfully.
May 15 16:02:48.949006 systemd[1]: cri-containerd-161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e.scope: Consumed 43ms CPU time, 15.6M memory peak, 7.8M read from disk, 6.3M written to disk.
May 15 16:02:48.956689 containerd[1533]: time="2025-05-15T16:02:48.956631006Z" level=info msg="received exit event container_id:\"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\" id:\"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\" pid:4397 exited_at:{seconds:1747324968 nanos:956246080}"
May 15 16:02:48.958426 containerd[1533]: time="2025-05-15T16:02:48.958377084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\" id:\"161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e\" pid:4397 exited_at:{seconds:1747324968 nanos:956246080}"
May 15 16:02:49.004177 containerd[1533]: time="2025-05-15T16:02:49.004042705Z" level=error msg="Failed to destroy network for sandbox \"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:49.005812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-161ecab113588f071f46525a9a587c71a9291f663912222116222297d64ee22e-rootfs.mount: Deactivated successfully.
May 15 16:02:49.011815 containerd[1533]: time="2025-05-15T16:02:49.011757449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:49.012607 kubelet[2763]: E0515 16:02:49.012334 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:49.012607 kubelet[2763]: E0515 16:02:49.012412 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:49.012607 kubelet[2763]: E0515 16:02:49.012441 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:02:49.012607 kubelet[2763]: E0515 16:02:49.012493 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3b713a5945a93abcea273c3d78b8c7d7b07b2fa12f4554ede5a51baf545f625\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966"
May 15 16:02:49.015705 systemd[1]: run-netns-cni\x2d8c024e2a\x2d4bc6\x2d27c9\x2d2021\x2ded49248831ca.mount: Deactivated successfully.
May 15 16:02:49.208981 kubelet[2763]: E0515 16:02:49.208927 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:49.216085 containerd[1533]: time="2025-05-15T16:02:49.215883883Z" level=info msg="CreateContainer within sandbox \"bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 15 16:02:49.230331 containerd[1533]: time="2025-05-15T16:02:49.230268854Z" level=info msg="Container 1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad: CDI devices from CRI Config.CDIDevices: []"
May 15 16:02:49.267011 containerd[1533]: time="2025-05-15T16:02:49.266940926Z" level=info msg="CreateContainer within sandbox \"bb4df4e45f4e57b24fdfb455ecc3b8e8123a718ba4ccbd654c70f5ea0d16089c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\""
May 15 16:02:49.268270 containerd[1533]: time="2025-05-15T16:02:49.268218386Z" level=info msg="StartContainer for \"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\""
May 15 16:02:49.270856 containerd[1533]: time="2025-05-15T16:02:49.270806296Z" level=info msg="connecting to shim 1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad" address="unix:///run/containerd/s/9754ede6da9b2d8d8dd7c8940937ea42f6240c64fc6c105520699493f99c2db3" protocol=ttrpc version=3
May 15 16:02:49.298328 systemd[1]: Started cri-containerd-1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad.scope - libcontainer container 1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad.
May 15 16:02:49.351757 containerd[1533]: time="2025-05-15T16:02:49.351618260Z" level=info msg="StartContainer for \"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\" returns successfully"
May 15 16:02:50.220035 kubelet[2763]: E0515 16:02:50.219939 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:50.330644 systemd[1]: cri-containerd-1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad.scope: Deactivated successfully.
May 15 16:02:50.334487 systemd[1]: cri-containerd-1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad.scope: Consumed 938ms CPU time, 125.1M memory peak, 103.9M read from disk.
May 15 16:02:50.335964 containerd[1533]: time="2025-05-15T16:02:50.335890841Z" level=info msg="received exit event container_id:\"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\" id:\"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\" pid:4475 exited_at:{seconds:1747324970 nanos:335656490}"
May 15 16:02:50.337065 containerd[1533]: time="2025-05-15T16:02:50.337028971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\" id:\"1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad\" pid:4475 exited_at:{seconds:1747324970 nanos:335656490}"
May 15 16:02:50.372309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b2cf82e34a6364046f877395c6dda2a348505d2ba0dd4f7c2f02e55e9040cad-rootfs.mount: Deactivated successfully.
May 15 16:02:51.230622 kubelet[2763]: E0515 16:02:51.230241 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:51.245225 containerd[1533]: time="2025-05-15T16:02:51.244741858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 15 16:02:51.859247 kubelet[2763]: E0515 16:02:51.859198 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:02:51.859824 containerd[1533]: time="2025-05-15T16:02:51.859750362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}"
May 15 16:02:51.942677 containerd[1533]: time="2025-05-15T16:02:51.942605009Z" level=error msg="Failed to destroy network for sandbox \"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:51.945580 containerd[1533]: time="2025-05-15T16:02:51.945338227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:51.947451 kubelet[2763]: E0515 16:02:51.947394 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:02:51.947584 kubelet[2763]: E0515 16:02:51.947475 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:51.947584 kubelet[2763]: E0515 16:02:51.947513 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:02:51.947670 kubelet[2763]: E0515 16:02:51.947570 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1c7fa87344d193f741a229d0b8352323b1e3d6f84a21ad634bca5d6585ac95c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e"
May 15 16:02:51.948949 systemd[1]: run-netns-cni\x2de14a211d\x2d1bd9\x2def95\x2dfe65\x2dd94b66cd33ad.mount: Deactivated successfully.
May 15 16:02:52.562621 systemd[1]: Started sshd@15-146.190.42.225:22-139.178.68.195:54358.service - OpenSSH per-connection server daemon (139.178.68.195:54358).
May 15 16:02:52.644025 sshd[4534]: Accepted publickey for core from 139.178.68.195 port 54358 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:02:52.644124 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:02:52.656825 systemd-logind[1490]: New session 16 of user core.
May 15 16:02:52.660810 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 16:02:52.830635 sshd[4536]: Connection closed by 139.178.68.195 port 54358
May 15 16:02:52.831082 sshd-session[4534]: pam_unix(sshd:session): session closed for user core
May 15 16:02:52.839692 systemd[1]: sshd@15-146.190.42.225:22-139.178.68.195:54358.service: Deactivated successfully.
May 15 16:02:52.843614 systemd[1]: session-16.scope: Deactivated successfully. May 15 16:02:52.848730 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. May 15 16:02:52.856084 systemd-logind[1490]: Removed session 16. May 15 16:02:55.170920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809934972.mount: Deactivated successfully. May 15 16:02:55.174458 containerd[1533]: time="2025-05-15T16:02:55.174391027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2809934972: mkdir /var/lib/containerd/tmpmounts/containerd-mount2809934972/usr/lib/.build-id/77: no space left on device" May 15 16:02:55.175161 containerd[1533]: time="2025-05-15T16:02:55.174818900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 16:02:55.175275 kubelet[2763]: E0515 16:02:55.175215 2763 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2809934972: mkdir /var/lib/containerd/tmpmounts/containerd-mount2809934972/usr/lib/.build-id/77: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 16:02:55.175643 kubelet[2763]: E0515 16:02:55.175289 2763 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed 
on /var/lib/containerd/tmpmounts/containerd-mount2809934972: mkdir /var/lib/containerd/tmpmounts/containerd-mount2809934972/usr/lib/.build-id/77: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 16:02:55.175643 kubelet[2763]: E0515 16:02:55.175535 2763 kuberuntime_manager.go:1256] container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},
EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-l99xj_calico-system(0e33e6c6-c6c7-474c-b042-d3d51a0e6649): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2809934972: mkdir /var/lib/containerd/tmpmounts/containerd-mount2809934972/usr/lib/.build-id/77: no space left on device May 15 16:02:55.175818 kubelet[2763]: E0515 16:02:55.175577 2763 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2809934972: mkdir /var/lib/containerd/tmpmounts/containerd-mount2809934972/usr/lib/.build-id/77: no space left on device\"" pod="calico-system/calico-node-l99xj" podUID="0e33e6c6-c6c7-474c-b042-d3d51a0e6649" May 15 16:02:57.850936 systemd[1]: Started sshd@16-146.190.42.225:22-139.178.68.195:38886.service - OpenSSH per-connection server daemon (139.178.68.195:38886). May 15 16:02:57.859484 kubelet[2763]: E0515 16:02:57.859436 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:02:57.969425 sshd[4560]: Accepted publickey for core from 139.178.68.195 port 38886 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:02:57.971605 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:02:57.979197 systemd-logind[1490]: New session 17 of user core. May 15 16:02:57.984254 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 16:02:58.160047 sshd[4562]: Connection closed by 139.178.68.195 port 38886 May 15 16:02:58.162059 sshd-session[4560]: pam_unix(sshd:session): session closed for user core May 15 16:02:58.166606 systemd[1]: sshd@16-146.190.42.225:22-139.178.68.195:38886.service: Deactivated successfully. May 15 16:02:58.171155 systemd[1]: session-17.scope: Deactivated successfully. May 15 16:02:58.173176 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. May 15 16:02:58.174558 systemd-logind[1490]: Removed session 17. 
May 15 16:02:58.658359 kubelet[2763]: I0515 16:02:58.658325 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 16:02:58.659025 kubelet[2763]: I0515 16:02:58.658593 2763 container_gc.go:88] "Attempting to delete unused containers" May 15 16:02:58.661978 kubelet[2763]: I0515 16:02:58.661923 2763 image_gc_manager.go:404] "Attempting to delete unused images" May 15 16:02:58.675435 kubelet[2763]: I0515 16:02:58.675382 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 16:02:58.675592 kubelet[2763]: I0515 16:02:58.675472 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"] May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675506 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675515 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675522 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675528 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675545 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675560 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675573 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:58.675592 kubelet[2763]: E0515 16:02:58.675584 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:02:58.675826 kubelet[2763]: I0515 16:02:58.675596 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 16:02:58.859226 containerd[1533]: time="2025-05-15T16:02:58.859151358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}" May 15 16:02:58.950765 containerd[1533]: time="2025-05-15T16:02:58.950718026Z" level=error msg="Failed to destroy network for sandbox \"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:02:58.953895 containerd[1533]: time="2025-05-15T16:02:58.953818796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 
16:02:58.954297 systemd[1]: run-netns-cni\x2d9d13b25f\x2d4398\x2d2a54\x2d74e4\x2daed18556a8fa.mount: Deactivated successfully. May 15 16:02:58.955745 kubelet[2763]: E0515 16:02:58.954620 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:02:58.955745 kubelet[2763]: E0515 16:02:58.954737 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6" May 15 16:02:58.955745 kubelet[2763]: E0515 16:02:58.954767 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6" May 15 16:02:58.955745 kubelet[2763]: E0515 16:02:58.954817 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"1fe07802ef09e5b7a672e7906c23e739d657614f71b2714c48c042f8fcb65a82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e" May 15 16:03:02.859942 kubelet[2763]: E0515 16:03:02.859657 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:03.177549 systemd[1]: Started sshd@17-146.190.42.225:22-139.178.68.195:38896.service - OpenSSH per-connection server daemon (139.178.68.195:38896). May 15 16:03:03.248302 sshd[4604]: Accepted publickey for core from 139.178.68.195 port 38896 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:03.251013 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:03.259372 systemd-logind[1490]: New session 18 of user core. May 15 16:03:03.266714 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 16:03:03.443055 sshd[4606]: Connection closed by 139.178.68.195 port 38896 May 15 16:03:03.443362 sshd-session[4604]: pam_unix(sshd:session): session closed for user core May 15 16:03:03.448476 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. May 15 16:03:03.449549 systemd[1]: sshd@17-146.190.42.225:22-139.178.68.195:38896.service: Deactivated successfully. May 15 16:03:03.454230 systemd[1]: session-18.scope: Deactivated successfully. May 15 16:03:03.461068 systemd-logind[1490]: Removed session 18. 
May 15 16:03:03.858478 kubelet[2763]: E0515 16:03:03.858213 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:03.859457 containerd[1533]: time="2025-05-15T16:03:03.859083944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}" May 15 16:03:03.928753 containerd[1533]: time="2025-05-15T16:03:03.928675018Z" level=error msg="Failed to destroy network for sandbox \"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:03.932102 containerd[1533]: time="2025-05-15T16:03:03.932034524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:03.932700 systemd[1]: run-netns-cni\x2d04f8bffe\x2d28e7\x2dff83\x2d4a07\x2d2227346af6b5.mount: Deactivated successfully. 
May 15 16:03:03.933717 kubelet[2763]: E0515 16:03:03.933426 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:03.933717 kubelet[2763]: E0515 16:03:03.933577 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:03.933717 kubelet[2763]: E0515 16:03:03.933620 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:03.934174 kubelet[2763]: E0515 16:03:03.933718 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb13c9473d1d17ec0d8def43ac27159f790e4fba8517fc95d88736c85b1275e6\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966" May 15 16:03:04.858968 kubelet[2763]: E0515 16:03:04.858523 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:04.860201 containerd[1533]: time="2025-05-15T16:03:04.860160964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}" May 15 16:03:04.937249 containerd[1533]: time="2025-05-15T16:03:04.937059624Z" level=error msg="Failed to destroy network for sandbox \"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:04.940053 containerd[1533]: time="2025-05-15T16:03:04.939927980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:04.941334 systemd[1]: run-netns-cni\x2d168b6f45\x2d7d3a\x2d6fec\x2d9e1a\x2df383398feee1.mount: Deactivated successfully. 
May 15 16:03:04.944071 kubelet[2763]: E0515 16:03:04.942267 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:04.944071 kubelet[2763]: E0515 16:03:04.942354 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:04.944071 kubelet[2763]: E0515 16:03:04.942379 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:04.944071 kubelet[2763]: E0515 16:03:04.942421 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4881355b7ac54e271249bd6f5189397db77ee49f88439570cf27e560e5eb9071\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e" May 15 16:03:05.859501 kubelet[2763]: E0515 16:03:05.859403 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:07.859775 kubelet[2763]: E0515 16:03:07.859542 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:07.863956 containerd[1533]: time="2025-05-15T16:03:07.863157773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 16:03:08.468368 systemd[1]: Started sshd@18-146.190.42.225:22-139.178.68.195:43412.service - OpenSSH per-connection server daemon (139.178.68.195:43412). May 15 16:03:08.542206 sshd[4677]: Accepted publickey for core from 139.178.68.195 port 43412 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:08.545647 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:08.554852 systemd-logind[1490]: New session 19 of user core. May 15 16:03:08.560282 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 15 16:03:08.697120 kubelet[2763]: I0515 16:03:08.697065 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 16:03:08.697120 kubelet[2763]: I0515 16:03:08.697123 2763 container_gc.go:88] "Attempting to delete unused containers" May 15 16:03:08.701638 kubelet[2763]: I0515 16:03:08.701563 2763 image_gc_manager.go:404] "Attempting to delete unused images" May 15 16:03:08.726042 kubelet[2763]: I0515 16:03:08.725736 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 16:03:08.726042 kubelet[2763]: I0515 16:03:08.725951 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-82kdh","kube-system/coredns-7db6d8ff4d-h2t96","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"] May 15 16:03:08.726243 kubelet[2763]: E0515 16:03:08.726167 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:08.726243 kubelet[2763]: E0515 16:03:08.726191 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:08.726243 kubelet[2763]: E0515 16:03:08.726219 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj" May 15 16:03:08.726243 kubelet[2763]: E0515 16:03:08.726232 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6" May 15 16:03:08.726479 kubelet[2763]: E0515 16:03:08.726251 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:08.726479 kubelet[2763]: E0515 16:03:08.726266 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z" May 15 16:03:08.726479 kubelet[2763]: E0515 16:03:08.726389 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:08.726479 kubelet[2763]: E0515 16:03:08.726413 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:08.726479 kubelet[2763]: I0515 16:03:08.726428 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 16:03:08.741944 sshd[4679]: Connection closed by 139.178.68.195 port 43412 May 15 16:03:08.742818 sshd-session[4677]: pam_unix(sshd:session): session closed for user core May 15 16:03:08.755535 systemd[1]: sshd@18-146.190.42.225:22-139.178.68.195:43412.service: Deactivated successfully. May 15 16:03:08.758556 systemd[1]: session-19.scope: Deactivated successfully. May 15 16:03:08.760113 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. May 15 16:03:08.765405 systemd[1]: Started sshd@19-146.190.42.225:22-139.178.68.195:43424.service - OpenSSH per-connection server daemon (139.178.68.195:43424). May 15 16:03:08.767458 systemd-logind[1490]: Removed session 19. May 15 16:03:08.817676 sshd[4691]: Accepted publickey for core from 139.178.68.195 port 43424 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:08.820358 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:08.827256 systemd-logind[1490]: New session 20 of user core. May 15 16:03:08.835273 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 15 16:03:09.245444 sshd[4693]: Connection closed by 139.178.68.195 port 43424
May 15 16:03:09.246147 sshd-session[4691]: pam_unix(sshd:session): session closed for user core
May 15 16:03:09.260238 systemd[1]: sshd@19-146.190.42.225:22-139.178.68.195:43424.service: Deactivated successfully.
May 15 16:03:09.264350 systemd[1]: session-20.scope: Deactivated successfully.
May 15 16:03:09.266265 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
May 15 16:03:09.271068 systemd-logind[1490]: Removed session 20.
May 15 16:03:09.274431 systemd[1]: Started sshd@20-146.190.42.225:22-139.178.68.195:43436.service - OpenSSH per-connection server daemon (139.178.68.195:43436).
May 15 16:03:09.342878 sshd[4703]: Accepted publickey for core from 139.178.68.195 port 43436 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:03:09.346331 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:03:09.357074 systemd-logind[1490]: New session 21 of user core.
May 15 16:03:09.364412 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 16:03:11.861289 containerd[1533]: time="2025-05-15T16:03:11.860057145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:03:12.196035 containerd[1533]: time="2025-05-15T16:03:12.195455183Z" level=error msg="Failed to destroy network for sandbox \"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:12.204509 containerd[1533]: time="2025-05-15T16:03:12.202751874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:12.205200 kubelet[2763]: E0515 16:03:12.205052 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:12.207800 systemd[1]: run-netns-cni\x2da842c92b\x2d337a\x2d1429\x2d8412\x2d547376e22e0b.mount: Deactivated successfully.
May 15 16:03:12.211058 kubelet[2763]: E0515 16:03:12.208510 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:03:12.211058 kubelet[2763]: E0515 16:03:12.208593 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:03:12.213487 kubelet[2763]: E0515 16:03:12.210028 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e02332fbe627c69ad0e1527530cc624385599cc35ca0510c50ef7dd5e8efb875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:03:13.542770 containerd[1533]: time="2025-05-15T16:03:13.542429094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112162423: mkdir /var/lib/containerd/tmpmounts/containerd-mount4112162423/usr/lib/.build-id/68: no space left on device"
May 15 16:03:13.542916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112162423.mount: Deactivated successfully.
May 15 16:03:13.548635 kubelet[2763]: E0515 16:03:13.543633 2763 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112162423: mkdir /var/lib/containerd/tmpmounts/containerd-mount4112162423/usr/lib/.build-id/68: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:03:13.548635 kubelet[2763]: E0515 16:03:13.543792 2763 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112162423: mkdir /var/lib/containerd/tmpmounts/containerd-mount4112162423/usr/lib/.build-id/68: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 16:03:13.550497 containerd[1533]: time="2025-05-15T16:03:13.546704094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 15 16:03:13.550563 kubelet[2763]: E0515 16:03:13.548205 2763 kuberuntime_manager.go:1256] container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-l99xj_calico-system(0e33e6c6-c6c7-474c-b042-d3d51a0e6649): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112162423: mkdir /var/lib/containerd/tmpmounts/containerd-mount4112162423/usr/lib/.build-id/68: no space left on device
May 15 16:03:13.550749 kubelet[2763]: E0515 16:03:13.548260 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112162423: mkdir /var/lib/containerd/tmpmounts/containerd-mount4112162423/usr/lib/.build-id/68: no space left on device\"" pod="calico-system/calico-node-l99xj" podUID="0e33e6c6-c6c7-474c-b042-d3d51a0e6649"
May 15 16:03:13.565814 sshd[4705]: Connection closed by 139.178.68.195 port 43436
May 15 16:03:13.566953 sshd-session[4703]: pam_unix(sshd:session): session closed for user core
May 15 16:03:13.581513 systemd[1]: sshd@20-146.190.42.225:22-139.178.68.195:43436.service: Deactivated successfully.
May 15 16:03:13.584965 systemd[1]: session-21.scope: Deactivated successfully.
May 15 16:03:13.589201 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
May 15 16:03:13.592590 systemd[1]: Started sshd@21-146.190.42.225:22-139.178.68.195:60258.service - OpenSSH per-connection server daemon (139.178.68.195:60258).
May 15 16:03:13.595710 systemd-logind[1490]: Removed session 21.
May 15 16:03:13.710740 sshd[4758]: Accepted publickey for core from 139.178.68.195 port 60258 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:03:13.712835 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:03:13.720324 systemd-logind[1490]: New session 22 of user core.
May 15 16:03:13.728372 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 16:03:14.097152 sshd[4760]: Connection closed by 139.178.68.195 port 60258
May 15 16:03:14.099132 sshd-session[4758]: pam_unix(sshd:session): session closed for user core
May 15 16:03:14.112631 systemd[1]: sshd@21-146.190.42.225:22-139.178.68.195:60258.service: Deactivated successfully.
May 15 16:03:14.116823 systemd[1]: session-22.scope: Deactivated successfully.
May 15 16:03:14.118768 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
May 15 16:03:14.124363 systemd[1]: Started sshd@22-146.190.42.225:22-139.178.68.195:60268.service - OpenSSH per-connection server daemon (139.178.68.195:60268).
May 15 16:03:14.127087 systemd-logind[1490]: Removed session 22.
May 15 16:03:14.191353 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 60268 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:03:14.194438 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:03:14.204240 systemd-logind[1490]: New session 23 of user core.
May 15 16:03:14.210247 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 16:03:14.296909 update_engine[1493]: I20250515 16:03:14.296772 1493 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 16:03:14.296909 update_engine[1493]: I20250515 16:03:14.296891 1493 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 16:03:14.300894 update_engine[1493]: I20250515 16:03:14.300839 1493 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 16:03:14.303273 update_engine[1493]: I20250515 16:03:14.303122 1493 omaha_request_params.cc:62] Current group set to developer
May 15 16:03:14.303408 update_engine[1493]: I20250515 16:03:14.303352 1493 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 16:03:14.303408 update_engine[1493]: I20250515 16:03:14.303371 1493 update_attempter.cc:643] Scheduling an action processor start.
May 15 16:03:14.303408 update_engine[1493]: I20250515 16:03:14.303398 1493 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 16:03:14.303505 update_engine[1493]: I20250515 16:03:14.303478 1493 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 16:03:14.304150 update_engine[1493]: I20250515 16:03:14.303582 1493 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 16:03:14.304150 update_engine[1493]: I20250515 16:03:14.303601 1493 omaha_request_action.cc:272] Request:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.304150 update_engine[1493]:
May 15 16:03:14.306807 update_engine[1493]: I20250515 16:03:14.303608 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 16:03:14.326787 update_engine[1493]: I20250515 16:03:14.326726 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 16:03:14.327229 update_engine[1493]: I20250515 16:03:14.327176 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 16:03:14.330311 update_engine[1493]: E20250515 16:03:14.330248 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 16:03:14.330473 update_engine[1493]: I20250515 16:03:14.330430 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 16:03:14.334062 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 16:03:14.393237 sshd[4772]: Connection closed by 139.178.68.195 port 60268
May 15 16:03:14.392962 sshd-session[4770]: pam_unix(sshd:session): session closed for user core
May 15 16:03:14.398756 systemd[1]: sshd@22-146.190.42.225:22-139.178.68.195:60268.service: Deactivated successfully.
May 15 16:03:14.402348 systemd[1]: session-23.scope: Deactivated successfully.
May 15 16:03:14.403525 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
May 15 16:03:14.405601 systemd-logind[1490]: Removed session 23.
May 15 16:03:15.859196 kubelet[2763]: E0515 16:03:15.859160 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:03:18.794689 kubelet[2763]: I0515 16:03:18.794609 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 16:03:18.794689 kubelet[2763]: I0515 16:03:18.794655 2763 container_gc.go:88] "Attempting to delete unused containers"
May 15 16:03:18.802187 kubelet[2763]: I0515 16:03:18.802159 2763 image_gc_manager.go:404] "Attempting to delete unused images"
May 15 16:03:18.816945 kubelet[2763]: I0515 16:03:18.816722 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 16:03:18.816945 kubelet[2763]: I0515 16:03:18.816812 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"]
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816845 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816857 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816864 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816870 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816881 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816891 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816905 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb"
May 15 16:03:18.816945 kubelet[2763]: E0515 16:03:18.816915 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"
May 15 16:03:18.816945 kubelet[2763]: I0515 16:03:18.816925 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 16:03:18.859255 kubelet[2763]: E0515 16:03:18.859041 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:03:18.860167 containerd[1533]: time="2025-05-15T16:03:18.860111901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}"
May 15 16:03:18.943977 containerd[1533]: time="2025-05-15T16:03:18.943926294Z" level=error msg="Failed to destroy network for sandbox \"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:18.947136 containerd[1533]: time="2025-05-15T16:03:18.947070880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:18.947409 kubelet[2763]: E0515 16:03:18.947368 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:18.947494 kubelet[2763]: E0515 16:03:18.947434 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:03:18.947494 kubelet[2763]: E0515 16:03:18.947456 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh"
May 15 16:03:18.947610 kubelet[2763]: E0515 16:03:18.947501 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3002483db6884cd165b0eae7fae549b424db4c117d90345f82e68d3d83b2a032\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966"
May 15 16:03:18.949304 systemd[1]: run-netns-cni\x2d03cea164\x2d041e\x2d449c\x2d8bdc\x2dd39537b92f28.mount: Deactivated successfully.
May 15 16:03:19.411773 systemd[1]: Started sshd@23-146.190.42.225:22-139.178.68.195:60272.service - OpenSSH per-connection server daemon (139.178.68.195:60272).
May 15 16:03:19.487788 sshd[4815]: Accepted publickey for core from 139.178.68.195 port 60272 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec
May 15 16:03:19.489706 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 16:03:19.495101 systemd-logind[1490]: New session 24 of user core.
May 15 16:03:19.505299 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 16:03:19.652547 sshd[4817]: Connection closed by 139.178.68.195 port 60272
May 15 16:03:19.653606 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
May 15 16:03:19.660095 systemd[1]: sshd@23-146.190.42.225:22-139.178.68.195:60272.service: Deactivated successfully.
May 15 16:03:19.663415 systemd[1]: session-24.scope: Deactivated successfully.
May 15 16:03:19.665217 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
May 15 16:03:19.667666 systemd-logind[1490]: Removed session 24.
May 15 16:03:19.858331 kubelet[2763]: E0515 16:03:19.858291 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 15 16:03:19.859453 containerd[1533]: time="2025-05-15T16:03:19.859321717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}"
May 15 16:03:19.943307 containerd[1533]: time="2025-05-15T16:03:19.943178120Z" level=error msg="Failed to destroy network for sandbox \"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:19.946405 systemd[1]: run-netns-cni\x2d5acdfb39\x2dcb57\x2d3cb8\x2d4a14\x2de31540c2913f.mount: Deactivated successfully.
May 15 16:03:19.946749 containerd[1533]: time="2025-05-15T16:03:19.946662543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:19.947202 kubelet[2763]: E0515 16:03:19.947111 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:19.947374 kubelet[2763]: E0515 16:03:19.947306 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:03:19.947529 kubelet[2763]: E0515 16:03:19.947343 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96"
May 15 16:03:19.947624 kubelet[2763]: E0515 16:03:19.947596 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e97899da528b694a4537d38746f3ab50cc493237c19b3e87d955fb46f0ef6811\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e"
May 15 16:03:23.859007 containerd[1533]: time="2025-05-15T16:03:23.858647980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}"
May 15 16:03:23.946024 containerd[1533]: time="2025-05-15T16:03:23.945958742Z" level=error msg="Failed to destroy network for sandbox \"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:23.948980 systemd[1]: run-netns-cni\x2d542cccce\x2d78cc\x2da8c5\x2d510f\x2db4bc78839f18.mount: Deactivated successfully.
May 15 16:03:23.950000 containerd[1533]: time="2025-05-15T16:03:23.948952624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:23.950103 kubelet[2763]: E0515 16:03:23.949356 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 16:03:23.950103 kubelet[2763]: E0515 16:03:23.949430 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:03:23.950103 kubelet[2763]: E0515 16:03:23.949473 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6"
May 15 16:03:23.950103 kubelet[2763]: E0515 16:03:23.949518 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d94121fee1b0cbd1a666729b80ccef24d78a55e87dc5afc7ee393047069d0ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e"
May 15 16:03:24.242902 update_engine[1493]: I20250515 16:03:24.242049 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 16:03:24.242902 update_engine[1493]: I20250515 16:03:24.242432 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 16:03:24.242902 update_engine[1493]: I20250515 16:03:24.242795 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 16:03:24.303050 update_engine[1493]: E20250515 16:03:24.302963 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 16:03:24.303433 update_engine[1493]: I20250515 16:03:24.303365 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 16:03:24.672732 systemd[1]: Started sshd@24-146.190.42.225:22-139.178.68.195:44922.service - OpenSSH per-connection server daemon (139.178.68.195:44922).
May 15 16:03:24.744813 sshd[4893]: Accepted publickey for core from 139.178.68.195 port 44922 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:24.747336 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:24.754667 systemd-logind[1490]: New session 25 of user core. May 15 16:03:24.770286 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 16:03:24.919889 sshd[4895]: Connection closed by 139.178.68.195 port 44922 May 15 16:03:24.920823 sshd-session[4893]: pam_unix(sshd:session): session closed for user core May 15 16:03:24.925426 systemd[1]: sshd@24-146.190.42.225:22-139.178.68.195:44922.service: Deactivated successfully. May 15 16:03:24.928001 systemd[1]: session-25.scope: Deactivated successfully. May 15 16:03:24.929437 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. May 15 16:03:24.931094 systemd-logind[1490]: Removed session 25. May 15 16:03:27.858392 kubelet[2763]: E0515 16:03:27.858253 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:27.860814 kubelet[2763]: E0515 16:03:27.860772 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-l99xj" podUID="0e33e6c6-c6c7-474c-b042-d3d51a0e6649" May 15 16:03:28.830581 kubelet[2763]: I0515 16:03:28.830523 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 16:03:28.831713 kubelet[2763]: I0515 16:03:28.830848 2763 container_gc.go:88] "Attempting to delete unused containers" May 15 16:03:28.833536 kubelet[2763]: I0515 16:03:28.833511 2763 image_gc_manager.go:404] "Attempting to delete unused images" May 15 
16:03:28.854570 kubelet[2763]: I0515 16:03:28.854541 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 16:03:28.854708 kubelet[2763]: I0515 16:03:28.854621 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"] May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854657 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854666 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854674 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854681 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854693 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854704 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z" May 15 16:03:28.854708 kubelet[2763]: E0515 16:03:28.854712 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:28.855065 kubelet[2763]: E0515 
16:03:28.854721 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:28.855065 kubelet[2763]: I0515 16:03:28.854731 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 16:03:29.858756 kubelet[2763]: E0515 16:03:29.858575 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:29.859787 containerd[1533]: time="2025-05-15T16:03:29.859621760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,}" May 15 16:03:29.934871 systemd[1]: Started sshd@25-146.190.42.225:22-139.178.68.195:44928.service - OpenSSH per-connection server daemon (139.178.68.195:44928). May 15 16:03:29.947424 containerd[1533]: time="2025-05-15T16:03:29.947367615Z" level=error msg="Failed to destroy network for sandbox \"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:29.950682 systemd[1]: run-netns-cni\x2dbbe8ed3a\x2d6897\x2d5e17\x2d2d55\x2de133e1ce346e.mount: Deactivated successfully. 
May 15 16:03:29.951207 containerd[1533]: time="2025-05-15T16:03:29.950643096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82kdh,Uid:17084be0-dcb1-4553-93ab-fa631e730966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:29.951893 kubelet[2763]: E0515 16:03:29.951125 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:29.951893 kubelet[2763]: E0515 16:03:29.951246 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:29.951893 kubelet[2763]: E0515 16:03:29.951288 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:29.951893 kubelet[2763]: E0515 16:03:29.951362 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-82kdh_kube-system(17084be0-dcb1-4553-93ab-fa631e730966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d89896857dbb5b8c12b59f5081b2a127319b4460a0887237577a57969a3954b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-82kdh" podUID="17084be0-dcb1-4553-93ab-fa631e730966" May 15 16:03:30.000536 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 44928 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:30.003538 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:30.013152 systemd-logind[1490]: New session 26 of user core. May 15 16:03:30.020321 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 16:03:30.173766 sshd[4939]: Connection closed by 139.178.68.195 port 44928 May 15 16:03:30.174749 sshd-session[4936]: pam_unix(sshd:session): session closed for user core May 15 16:03:30.181394 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. May 15 16:03:30.181910 systemd[1]: sshd@25-146.190.42.225:22-139.178.68.195:44928.service: Deactivated successfully. May 15 16:03:30.184768 systemd[1]: session-26.scope: Deactivated successfully. May 15 16:03:30.187682 systemd-logind[1490]: Removed session 26. 
May 15 16:03:31.858793 kubelet[2763]: E0515 16:03:31.858701 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 15 16:03:31.860237 containerd[1533]: time="2025-05-15T16:03:31.860184825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,}" May 15 16:03:31.933955 containerd[1533]: time="2025-05-15T16:03:31.933865693Z" level=error msg="Failed to destroy network for sandbox \"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:31.937470 containerd[1533]: time="2025-05-15T16:03:31.937396229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2t96,Uid:d86f0cb4-0d25-49dd-9a44-3295d0b01a8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:31.939008 kubelet[2763]: E0515 16:03:31.937912 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:31.938923 systemd[1]: 
run-netns-cni\x2dd8476686\x2d37df\x2dc361\x2dd346\x2d0e3eb013d194.mount: Deactivated successfully. May 15 16:03:31.940882 kubelet[2763]: E0515 16:03:31.939617 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:31.940882 kubelet[2763]: E0515 16:03:31.939668 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:31.940882 kubelet[2763]: E0515 16:03:31.939745 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2t96_kube-system(d86f0cb4-0d25-49dd-9a44-3295d0b01a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa39836b04abaedf3faa56c5ed3dfc6c99ba4b94417e8b8599ab0eef631c6325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2t96" podUID="d86f0cb4-0d25-49dd-9a44-3295d0b01a8e" May 15 16:03:34.242138 update_engine[1493]: I20250515 16:03:34.242046 1493 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer May 15 16:03:34.242555 update_engine[1493]: I20250515 16:03:34.242306 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:03:34.242747 update_engine[1493]: I20250515 16:03:34.242565 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 16:03:34.243486 update_engine[1493]: E20250515 16:03:34.243347 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:03:34.243486 update_engine[1493]: I20250515 16:03:34.243419 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 16:03:35.188460 systemd[1]: Started sshd@26-146.190.42.225:22-139.178.68.195:45300.service - OpenSSH per-connection server daemon (139.178.68.195:45300). May 15 16:03:35.262772 sshd[4981]: Accepted publickey for core from 139.178.68.195 port 45300 ssh2: RSA SHA256:B/UxUXQCxSZ8KU40ngHRVORunBWEBvczyNhULS2+bec May 15 16:03:35.264792 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:03:35.272077 systemd-logind[1490]: New session 27 of user core. May 15 16:03:35.278350 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 16:03:35.455564 sshd[4983]: Connection closed by 139.178.68.195 port 45300 May 15 16:03:35.456267 sshd-session[4981]: pam_unix(sshd:session): session closed for user core May 15 16:03:35.465237 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. May 15 16:03:35.466470 systemd[1]: sshd@26-146.190.42.225:22-139.178.68.195:45300.service: Deactivated successfully. May 15 16:03:35.471563 systemd[1]: session-27.scope: Deactivated successfully. May 15 16:03:35.475580 systemd-logind[1490]: Removed session 27. 
May 15 16:03:35.859431 containerd[1533]: time="2025-05-15T16:03:35.859010842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,}" May 15 16:03:35.931777 containerd[1533]: time="2025-05-15T16:03:35.931703534Z" level=error msg="Failed to destroy network for sandbox \"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:35.934608 containerd[1533]: time="2025-05-15T16:03:35.934557070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2wp6,Uid:15ff8378-e357-4a15-80de-bc12411a603e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:35.934883 kubelet[2763]: E0515 16:03:35.934847 2763 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:03:35.935371 kubelet[2763]: E0515 16:03:35.934917 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6" May 15 16:03:35.935371 kubelet[2763]: E0515 16:03:35.934938 2763 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2wp6" May 15 16:03:35.935371 kubelet[2763]: E0515 16:03:35.935157 2763 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2wp6_calico-system(15ff8378-e357-4a15-80de-bc12411a603e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23143781f56bb0444e4645477c20d8a53d73881b4ed7beeeb5a9efa236774b7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2wp6" podUID="15ff8378-e357-4a15-80de-bc12411a603e" May 15 16:03:35.935594 systemd[1]: run-netns-cni\x2ddacf83cb\x2db3e8\x2d1168\x2d45cf\x2d6c614c49eeb8.mount: Deactivated successfully. 
May 15 16:03:38.871019 kubelet[2763]: I0515 16:03:38.870807 2763 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 16:03:38.871019 kubelet[2763]: I0515 16:03:38.870852 2763 container_gc.go:88] "Attempting to delete unused containers" May 15 16:03:38.875827 kubelet[2763]: I0515 16:03:38.875795 2763 image_gc_manager.go:404] "Attempting to delete unused images" May 15 16:03:38.878268 kubelet[2763]: I0515 16:03:38.878235 2763 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" size=321520 runtimeHandler="" May 15 16:03:38.885121 containerd[1533]: time="2025-05-15T16:03:38.885072915Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 16:03:38.892948 containerd[1533]: time="2025-05-15T16:03:38.892284217Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" returns successfully" May 15 16:03:38.893387 kubelet[2763]: I0515 16:03:38.893344 2763 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" size=57236178 runtimeHandler="" May 15 16:03:38.894037 containerd[1533]: time="2025-05-15T16:03:38.893881185Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 16:03:38.897246 containerd[1533]: time="2025-05-15T16:03:38.897208669Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" returns successfully" May 15 16:03:38.897657 kubelet[2763]: I0515 16:03:38.897617 2763 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 15 16:03:38.898175 containerd[1533]: time="2025-05-15T16:03:38.898078526Z" level=info 
msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 16:03:38.900216 containerd[1533]: time="2025-05-15T16:03:38.900025656Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 15 16:03:38.914316 kubelet[2763]: I0515 16:03:38.914256 2763 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 16:03:38.914624 kubelet[2763]: I0515 16:03:38.914525 2763 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-h2t96","kube-system/coredns-7db6d8ff4d-82kdh","calico-system/calico-node-l99xj","calico-system/csi-node-driver-w2wp6","kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-proxy-rnj6z","kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb","kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb"] May 15 16:03:38.914624 kubelet[2763]: E0515 16:03:38.914576 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-h2t96" May 15 16:03:38.914624 kubelet[2763]: E0515 16:03:38.914586 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-82kdh" May 15 16:03:38.914624 kubelet[2763]: E0515 16:03:38.914596 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-l99xj" May 15 16:03:38.914624 kubelet[2763]: E0515 16:03:38.914603 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-w2wp6" May 15 16:03:38.914842 kubelet[2763]: E0515 16:03:38.914784 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:38.914842 kubelet[2763]: E0515 16:03:38.914803 2763 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-proxy-rnj6z" May 15 16:03:38.914842 kubelet[2763]: E0515 16:03:38.914813 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:38.914842 kubelet[2763]: E0515 16:03:38.914821 2763 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-32b0bb88bb" May 15 16:03:38.914842 kubelet[2763]: I0515 16:03:38.914831 2763 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 16:03:38.950879 containerd[1533]: time="2025-05-15T16:03:38.950784345Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.9\"" May 15 16:03:38.951065 containerd[1533]: time="2025-05-15T16:03:38.950935395Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"" May 15 16:03:38.951065 containerd[1533]: time="2025-05-15T16:03:38.951023356Z" level=info msg="ImageDelete event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 16:03:38.951131 containerd[1533]: time="2025-05-15T16:03:38.951081684Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.12-0\"" May 15 16:03:38.951158 containerd[1533]: time="2025-05-15T16:03:38.951138229Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"" May 15 16:03:38.951235 containerd[1533]: time="2025-05-15T16:03:38.951187269Z" level=info msg="ImageDelete event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 16:03:38.951325 containerd[1533]: time="2025-05-15T16:03:38.951245244Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 16:03:38.951325 containerd[1533]: time="2025-05-15T16:03:38.951288160Z" level=info msg="ImageDelete 
event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 15 16:03:38.951377 containerd[1533]: time="2025-05-15T16:03:38.951344237Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""