May 15 15:12:42.807398 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025 May 15 15:12:42.807429 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 15:12:42.807442 kernel: BIOS-provided physical RAM map: May 15 15:12:42.807452 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 15:12:42.807462 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 15:12:42.807472 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 15:12:42.807480 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 15 15:12:42.807491 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 15 15:12:42.807501 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 15:12:42.807508 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 15:12:42.807515 kernel: NX (Execute Disable) protection: active May 15 15:12:42.807521 kernel: APIC: Static calls initialized May 15 15:12:42.807528 kernel: SMBIOS 2.8 present. May 15 15:12:42.807535 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 15 15:12:42.807546 kernel: DMI: Memory slots populated: 1/1 May 15 15:12:42.807554 kernel: Hypervisor detected: KVM May 15 15:12:42.807564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 15:12:42.807572 kernel: kvm-clock: using sched offset of 4096116940 cycles May 15 15:12:42.807580 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 15:12:42.807588 kernel: tsc: Detected 2494.140 MHz processor May 15 15:12:42.807596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 15:12:42.807605 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 15:12:42.807612 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 15 15:12:42.807627 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 15 15:12:42.807638 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 15:12:42.807649 kernel: ACPI: Early table checksum verification disabled May 15 15:12:42.807661 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 15 15:12:42.807675 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807691 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807703 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807717 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 15 15:12:42.807730 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807750 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807765 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 15:12:42.807780 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) May 15 15:12:42.807794 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] May 15 15:12:42.807807 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 15 15:12:42.807821 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 15 15:12:42.807835 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 15 15:12:42.807851 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 15 15:12:42.807875 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 15 15:12:42.807892 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 15 15:12:42.807911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 15 15:12:42.807930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 15 15:12:42.807948 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] May 15 15:12:42.807967 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] May 15 15:12:42.807990 kernel: Zone ranges: May 15 15:12:42.808007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 15:12:42.808025 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 15 15:12:42.808044 kernel: Normal empty May 15 15:12:42.808063 kernel: Device empty May 15 15:12:42.808081 kernel: Movable zone start for each node May 15 15:12:42.808098 kernel: Early memory node ranges May 15 15:12:42.808117 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 15:12:42.808136 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 15 15:12:42.808161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 15 15:12:42.808216 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 15:12:42.808236 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 15:12:42.808255 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 15 15:12:42.808273 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 15:12:42.808291 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 15:12:42.808320 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 15:12:42.808339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 15:12:42.808362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 15:12:42.808382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 15:12:42.808397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 15:12:42.808409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 15:12:42.808421 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 15:12:42.808432 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 15:12:42.808440 kernel: TSC deadline timer available May 15 15:12:42.808448 kernel: CPU topo: Max. logical packages: 1 May 15 15:12:42.808457 kernel: CPU topo: Max. logical dies: 1 May 15 15:12:42.808465 kernel: CPU topo: Max. dies per package: 1 May 15 15:12:42.808473 kernel: CPU topo: Max. threads per core: 1 May 15 15:12:42.808484 kernel: CPU topo: Num. cores per package: 2 May 15 15:12:42.808492 kernel: CPU topo: Num. 
threads per package: 2 May 15 15:12:42.808501 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 15 15:12:42.808513 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 15:12:42.808525 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 15 15:12:42.808537 kernel: Booting paravirtualized kernel on KVM May 15 15:12:42.808549 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 15:12:42.808562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 15 15:12:42.808570 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 15 15:12:42.808582 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 15 15:12:42.808590 kernel: pcpu-alloc: [0] 0 1 May 15 15:12:42.808598 kernel: kvm-guest: PV spinlocks disabled, no host support May 15 15:12:42.808608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 15:12:42.808617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 15:12:42.808625 kernel: random: crng init done May 15 15:12:42.808633 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 15:12:42.808642 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 15 15:12:42.808652 kernel: Fallback order for Node 0: 0 May 15 15:12:42.808660 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 May 15 15:12:42.808669 kernel: Policy zone: DMA32 May 15 15:12:42.808677 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 15:12:42.808685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 15:12:42.808694 kernel: Kernel/User page tables isolation: enabled May 15 15:12:42.808702 kernel: ftrace: allocating 40065 entries in 157 pages May 15 15:12:42.808710 kernel: ftrace: allocated 157 pages with 5 groups May 15 15:12:42.808718 kernel: Dynamic Preempt: voluntary May 15 15:12:42.808729 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 15:12:42.808739 kernel: rcu: RCU event tracing is enabled. May 15 15:12:42.808747 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 15:12:42.808755 kernel: Trampoline variant of Tasks RCU enabled. May 15 15:12:42.808764 kernel: Rude variant of Tasks RCU enabled. May 15 15:12:42.808772 kernel: Tracing variant of Tasks RCU enabled. May 15 15:12:42.808780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 15:12:42.808788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 15:12:42.808796 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 15:12:42.808811 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 15:12:42.808820 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 15 15:12:42.808828 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 15:12:42.808836 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 15:12:42.808844 kernel: Console: colour VGA+ 80x25 May 15 15:12:42.808852 kernel: printk: legacy console [tty0] enabled May 15 15:12:42.808860 kernel: printk: legacy console [ttyS0] enabled May 15 15:12:42.808869 kernel: ACPI: Core revision 20240827 May 15 15:12:42.808877 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 15:12:42.808896 kernel: APIC: Switch to symmetric I/O mode setup May 15 15:12:42.808905 kernel: x2apic enabled May 15 15:12:42.808914 kernel: APIC: Switched APIC routing to: physical x2apic May 15 15:12:42.808925 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 15:12:42.808935 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 15 15:12:42.808944 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) May 15 15:12:42.808953 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 15 15:12:42.808962 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 15 15:12:42.808971 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 15:12:42.808983 kernel: Spectre V2 : Mitigation: Retpolines May 15 15:12:42.808991 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 15 15:12:42.809000 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 15 15:12:42.809009 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 15 15:12:42.809018 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 15:12:42.809027 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 15:12:42.809036 kernel: MDS: Mitigation: Clear CPU buffers May 15 15:12:42.809047 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 15 15:12:42.809056 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 15:12:42.809064 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 15:12:42.809073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 15:12:42.809082 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 15:12:42.809091 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 15:12:42.809100 kernel: Freeing SMP alternatives memory: 32K May 15 15:12:42.809108 kernel: pid_max: default: 32768 minimum: 301 May 15 15:12:42.809117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 15:12:42.809128 kernel: landlock: Up and running. May 15 15:12:42.809137 kernel: SELinux: Initializing. May 15 15:12:42.809146 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 15:12:42.809154 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 15:12:42.809163 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 15 15:12:42.809185 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. May 15 15:12:42.809203 kernel: signal: max sigframe size: 1776 May 15 15:12:42.809212 kernel: rcu: Hierarchical SRCU implementation. 
May 15 15:12:42.809221 kernel: rcu: Max phase no-delay instances is 400. May 15 15:12:42.809234 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 15:12:42.809242 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 15 15:12:42.809251 kernel: smp: Bringing up secondary CPUs ... May 15 15:12:42.809260 kernel: smpboot: x86: Booting SMP configuration: May 15 15:12:42.809271 kernel: .... node #0, CPUs: #1 May 15 15:12:42.809280 kernel: smp: Brought up 1 node, 2 CPUs May 15 15:12:42.809289 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) May 15 15:12:42.809298 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 125140K reserved, 0K cma-reserved) May 15 15:12:42.809307 kernel: devtmpfs: initialized May 15 15:12:42.809319 kernel: x86/mm: Memory block size: 128MB May 15 15:12:42.809328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 15:12:42.809337 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 15:12:42.809346 kernel: pinctrl core: initialized pinctrl subsystem May 15 15:12:42.809354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 15:12:42.809363 kernel: audit: initializing netlink subsys (disabled) May 15 15:12:42.809372 kernel: audit: type=2000 audit(1747321959.778:1): state=initialized audit_enabled=0 res=1 May 15 15:12:42.809380 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 15:12:42.809389 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 15:12:42.809401 kernel: cpuidle: using governor menu May 15 15:12:42.809410 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 15:12:42.809418 kernel: dca service started, version 1.12.1 May 15 15:12:42.809427 kernel: PCI: Using configuration type 1 for base access May 15 15:12:42.809436 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 15:12:42.809445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 15:12:42.809454 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 15:12:42.809465 kernel: ACPI: Added _OSI(Module Device) May 15 15:12:42.809481 kernel: ACPI: Added _OSI(Processor Device) May 15 15:12:42.809495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 15:12:42.809507 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 15:12:42.809520 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 15:12:42.809531 kernel: ACPI: Interpreter enabled May 15 15:12:42.809542 kernel: ACPI: PM: (supports S0 S5) May 15 15:12:42.809553 kernel: ACPI: Using IOAPIC for interrupt routing May 15 15:12:42.809565 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 15:12:42.809576 kernel: PCI: Using E820 reservations for host bridge windows May 15 15:12:42.809588 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 15 15:12:42.809604 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 15:12:42.809832 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 15 15:12:42.809935 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 15 15:12:42.810027 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 15 15:12:42.810039 kernel: acpiphp: Slot [3] registered May 15 15:12:42.810049 kernel: acpiphp: Slot [4] registered May 15 15:12:42.810058 kernel: acpiphp: Slot [5] registered May 15 15:12:42.810071 kernel: acpiphp: Slot [6] registered May 15 15:12:42.810080 kernel: acpiphp: Slot [7] registered May 15 15:12:42.810089 kernel: acpiphp: Slot [8] registered May 15 15:12:42.810097 kernel: acpiphp: Slot [9] registered May 15 15:12:42.810106 kernel: acpiphp: Slot [10] registered May 15 15:12:42.810115 kernel: acpiphp: Slot [11] registered May 15 15:12:42.810124 kernel: acpiphp: Slot [12] registered May 15 15:12:42.810133 kernel: acpiphp: Slot [13] registered May 15 15:12:42.810141 kernel: acpiphp: Slot [14] registered May 15 15:12:42.810150 kernel: acpiphp: Slot [15] registered May 15 15:12:42.810162 kernel: acpiphp: Slot [16] registered May 15 15:12:42.810186 kernel: acpiphp: Slot [17] registered May 15 15:12:42.810195 kernel: acpiphp: Slot [18] registered May 15 15:12:42.810214 kernel: acpiphp: Slot [19] registered May 15 15:12:42.810223 kernel: acpiphp: Slot [20] registered May 15 15:12:42.810232 kernel: acpiphp: Slot [21] registered May 15 15:12:42.810240 kernel: acpiphp: Slot [22] registered May 15 15:12:42.810249 kernel: acpiphp: Slot [23] registered May 15 15:12:42.810258 kernel: acpiphp: Slot [24] registered May 15 15:12:42.810270 kernel: acpiphp: Slot [25] registered May 15 15:12:42.810279 kernel: acpiphp: Slot [26] registered May 15 15:12:42.810288 kernel: acpiphp: Slot [27] registered May 15 15:12:42.810296 kernel: acpiphp: Slot [28] registered May 15 15:12:42.810305 kernel: acpiphp: Slot [29] registered May 15 15:12:42.810314 kernel: acpiphp: Slot [30] registered May 15 15:12:42.810323 kernel: acpiphp: Slot [31] registered May 15 15:12:42.810332 kernel: PCI host bridge to bus 0000:00 May 15 15:12:42.810433 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 15:12:42.810518 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 15:12:42.810602 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] May 15 15:12:42.810680 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 15 15:12:42.810758 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 15 15:12:42.810835 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 15:12:42.810955 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 15 15:12:42.811058 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 15 15:12:42.811159 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 15 15:12:42.811260 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] May 15 15:12:42.811348 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 15 15:12:42.811437 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 15 15:12:42.811537 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 15 15:12:42.811662 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 15 15:12:42.811815 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint May 15 15:12:42.811910 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] May 15 15:12:42.812014 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 15 15:12:42.812104 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 15 15:12:42.812217 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 15 15:12:42.812345 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 15 15:12:42.812442 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 15 15:12:42.812529 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] May 15 15:12:42.812617 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] May 15 15:12:42.812705 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] May 15 15:12:42.812792 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 15:12:42.812914 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 15 15:12:42.813005 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] May 15 15:12:42.813097 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] May 15 15:12:42.813198 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] May 15 15:12:42.813298 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 15 15:12:42.813387 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] May 15 15:12:42.813475 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] May 15 15:12:42.813563 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] May 15 15:12:42.813670 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint May 15 15:12:42.813834 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] May 15 15:12:42.813953 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] May 15 15:12:42.814043 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] May 15 15:12:42.814142 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 15 15:12:42.814245 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] May 15 15:12:42.814334 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] May 15 15:12:42.814421 kernel: pci 0000:00:06.0: BAR 4 [mem 
0xfe810000-0xfe813fff 64bit pref] May 15 15:12:42.814572 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 15 15:12:42.814663 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] May 15 15:12:42.814750 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] May 15 15:12:42.814856 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] May 15 15:12:42.814981 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 15 15:12:42.815072 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] May 15 15:12:42.815165 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] May 15 15:12:42.815187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 15:12:42.815196 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 15:12:42.815205 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 15:12:42.815214 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 15:12:42.815223 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 15 15:12:42.815232 kernel: iommu: Default domain type: Translated May 15 15:12:42.815241 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 15:12:42.815254 kernel: PCI: Using ACPI for IRQ routing May 15 15:12:42.815263 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 15:12:42.815272 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 15:12:42.815281 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 15 15:12:42.815370 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 15 15:12:42.815459 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 15 15:12:42.815549 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 15:12:42.815561 kernel: vgaarb: loaded May 15 15:12:42.815571 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 15:12:42.815583 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 15:12:42.815592 kernel: clocksource: Switched to clocksource kvm-clock May 15 15:12:42.815600 kernel: VFS: Disk quotas dquot_6.6.0 May 15 15:12:42.815609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 15:12:42.815618 kernel: pnp: PnP ACPI init May 15 15:12:42.815627 kernel: pnp: PnP ACPI: found 4 devices May 15 15:12:42.815636 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 15:12:42.815646 kernel: NET: Registered PF_INET protocol family May 15 15:12:42.815654 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 15:12:42.815666 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 15 15:12:42.815675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 15:12:42.815684 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 15 15:12:42.815693 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 15 15:12:42.815702 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 15 15:12:42.815711 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 15 15:12:42.815720 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 15 15:12:42.815729 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 15:12:42.815738 kernel: NET: Registered PF_XDP protocol family May 15 15:12:42.815828 kernel: pci_bus 
0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 15:12:42.815908 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 15:12:42.815986 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 15:12:42.816064 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 15 15:12:42.816142 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 15 15:12:42.816257 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 15 15:12:42.816352 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 15:12:42.816365 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 15 15:12:42.816460 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 24769 usecs May 15 15:12:42.816471 kernel: PCI: CLS 0 bytes, default 64 May 15 15:12:42.816480 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 15 15:12:42.816490 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 15 15:12:42.816499 kernel: Initialise system trusted keyrings May 15 15:12:42.816507 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 15 15:12:42.816516 kernel: Key type asymmetric registered May 15 15:12:42.816525 kernel: Asymmetric key parser 'x509' registered May 15 15:12:42.816537 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 15:12:42.816545 kernel: io scheduler mq-deadline registered May 15 15:12:42.816554 kernel: io scheduler kyber registered May 15 15:12:42.816563 kernel: io scheduler bfq registered May 15 15:12:42.816572 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 15:12:42.816581 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 15 15:12:42.816590 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 15 15:12:42.816599 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 15 15:12:42.816608 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 15:12:42.816617 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 15:12:42.816628 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 15:12:42.816637 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 15:12:42.816646 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 15:12:42.816758 kernel: rtc_cmos 00:03: RTC can wake from S4 May 15 15:12:42.816771 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 15:12:42.816852 kernel: rtc_cmos 00:03: registered as rtc0 May 15 15:12:42.816934 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T15:12:42 UTC (1747321962) May 15 15:12:42.817020 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 15 15:12:42.817032 kernel: intel_pstate: CPU model not supported May 15 15:12:42.817041 kernel: NET: Registered PF_INET6 protocol family May 15 15:12:42.817050 kernel: Segment Routing with IPv6 May 15 15:12:42.817059 kernel: In-situ OAM (IOAM) with IPv6 May 15 15:12:42.817068 kernel: NET: Registered PF_PACKET protocol family May 15 15:12:42.817077 kernel: Key type dns_resolver registered May 15 15:12:42.817085 kernel: IPI shorthand broadcast: enabled May 15 15:12:42.817094 kernel: sched_clock: Marking stable (3079004880, 97774044)->(3192836272, -16057348) May 15 15:12:42.817106 kernel: registered taskstats version 1 May 15 15:12:42.817116 kernel: Loading compiled-in X.509 certificates May 15 15:12:42.817125 kernel: Loaded X.509 cert 'Kinvolk GmbH: 
Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6' May 15 15:12:42.817133 kernel: Demotion targets for Node 0: null May 15 15:12:42.817142 kernel: Key type .fscrypt registered May 15 15:12:42.817151 kernel: Key type fscrypt-provisioning registered May 15 15:12:42.817186 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 15:12:42.817212 kernel: ima: Allocated hash algorithm: sha1 May 15 15:12:42.817222 kernel: ima: No architecture policies found May 15 15:12:42.817234 kernel: clk: Disabling unused clocks May 15 15:12:42.817243 kernel: Warning: unable to open an initial console. May 15 15:12:42.817253 kernel: Freeing unused kernel image (initmem) memory: 54416K May 15 15:12:42.817262 kernel: Write protecting the kernel read-only data: 24576k May 15 15:12:42.817271 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 15 15:12:42.817280 kernel: Run /init as init process May 15 15:12:42.817289 kernel: with arguments: May 15 15:12:42.817298 kernel: /init May 15 15:12:42.817307 kernel: with environment: May 15 15:12:42.817319 kernel: HOME=/ May 15 15:12:42.817328 kernel: TERM=linux May 15 15:12:42.817337 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 15:12:42.817351 systemd[1]: Successfully made /usr/ read-only. May 15 15:12:42.817371 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 15:12:42.817385 systemd[1]: Detected virtualization kvm. May 15 15:12:42.817398 systemd[1]: Detected architecture x86-64. May 15 15:12:42.817411 systemd[1]: Running in initrd. May 15 15:12:42.817424 systemd[1]: No hostname configured, using default hostname. May 15 15:12:42.817434 systemd[1]: Hostname set to . May 15 15:12:42.817443 systemd[1]: Initializing machine ID from VM UUID. May 15 15:12:42.817453 systemd[1]: Queued start job for default target initrd.target. May 15 15:12:42.817464 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 15:12:42.817473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 15:12:42.817483 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 15:12:42.817493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 15:12:42.817505 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 15:12:42.817519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 15:12:42.817530 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 15:12:42.817542 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 15:12:42.817552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 15:12:42.817562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 15:12:42.817572 systemd[1]: Reached target paths.target - Path Units. May 15 15:12:42.817582 systemd[1]: Reached target slices.target - Slice Units. 
May 15 15:12:42.817594 systemd[1]: Reached target swap.target - Swaps. May 15 15:12:42.817608 systemd[1]: Reached target timers.target - Timer Units. May 15 15:12:42.817619 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 15:12:42.817630 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 15:12:42.817650 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 15:12:42.817664 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 15:12:42.817677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 15:12:42.817692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 15:12:42.817718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 15:12:42.817733 systemd[1]: Reached target sockets.target - Socket Units. May 15 15:12:42.817748 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 15:12:42.817762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 15:12:42.817779 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 15:12:42.817790 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 15:12:42.817800 systemd[1]: Starting systemd-fsck-usr.service... May 15 15:12:42.817810 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 15:12:42.817820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 15:12:42.817830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:12:42.817881 systemd-journald[212]: Collecting audit messages is disabled. May 15 15:12:42.817908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 15:12:42.817918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 15:12:42.817932 systemd-journald[212]: Journal started May 15 15:12:42.817954 systemd-journald[212]: Runtime Journal (/run/log/journal/88e29c98fc0e4aee9640f8416ea08257) is 4.9M, max 39.5M, 34.6M free. May 15 15:12:42.820215 systemd[1]: Started systemd-journald.service - Journal Service. May 15 15:12:42.823582 systemd[1]: Finished systemd-fsck-usr.service. May 15 15:12:42.830303 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 15:12:42.833359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 15:12:42.840027 systemd-modules-load[214]: Inserted module 'overlay' May 15 15:12:42.853967 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 15:12:42.888514 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 15:12:42.888546 kernel: Bridge firewalling registered May 15 15:12:42.863107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 15:12:42.874060 systemd-modules-load[214]: Inserted module 'br_netfilter' May 15 15:12:42.889213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 15:12:42.890400 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 15:12:42.890914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 15:12:42.893972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 15:12:42.895082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 15:12:42.897326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 15:12:42.914222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 15:12:42.916634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 15:12:42.919407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 15:12:42.934438 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 15:12:42.937412 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 15:12:42.969214 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 15:12:42.974550 systemd-resolved[243]: Positive Trust Anchors: May 15 15:12:42.975359 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 15:12:42.975995 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 15:12:42.981609 systemd-resolved[243]: Defaulting to hostname 'linux'. May 15 15:12:42.983353 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 15:12:42.983850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 15:12:43.069241 kernel: SCSI subsystem initialized May 15 15:12:43.079210 kernel: Loading iSCSI transport class v2.0-870. May 15 15:12:43.092223 kernel: iscsi: registered transport (tcp) May 15 15:12:43.115378 kernel: iscsi: registered transport (qla4xxx) May 15 15:12:43.115457 kernel: QLogic iSCSI HBA Driver May 15 15:12:43.143026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 15:12:43.171108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 15:12:43.172093 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 15:12:43.240155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 15:12:43.242084 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 15 15:12:43.295222 kernel: raid6: avx2x4 gen() 17159 MB/s May 15 15:12:43.312217 kernel: raid6: avx2x2 gen() 17150 MB/s May 15 15:12:43.329380 kernel: raid6: avx2x1 gen() 12736 MB/s May 15 15:12:43.329461 kernel: raid6: using algorithm avx2x4 gen() 17159 MB/s May 15 15:12:43.347723 kernel: raid6: .... xor() 8907 MB/s, rmw enabled May 15 15:12:43.347817 kernel: raid6: using avx2x2 recovery algorithm May 15 15:12:43.370216 kernel: xor: automatically using best checksumming function avx May 15 15:12:43.540357 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 15:12:43.547365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 15:12:43.550316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 15:12:43.577271 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 15 15:12:43.584745 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 15:12:43.588178 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 15:12:43.615070 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation May 15 15:12:43.646218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 15:12:43.648716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 15:12:43.713221 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 15:12:43.716630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 15:12:43.791830 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues May 15 15:12:43.796581 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues May 15 15:12:43.845918 kernel: scsi host0: Virtio SCSI HBA May 15 15:12:43.849576 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 15 15:12:43.850671 kernel: cryptd: max_cpu_qlen set to 1000 May 15 15:12:43.850692 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 15:12:43.850704 kernel: GPT:9289727 != 125829119 May 15 15:12:43.850715 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 15:12:43.850727 kernel: GPT:9289727 != 125829119 May 15 15:12:43.850738 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 15:12:43.850749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 15:12:43.850760 kernel: AES CTR mode by8 optimization enabled May 15 15:12:43.850772 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues May 15 15:12:43.874474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 15 15:12:43.874500 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) May 15 15:12:43.876294 kernel: ACPI: bus type USB registered May 15 15:12:43.876313 kernel: libata version 3.00 loaded. 
May 15 15:12:43.876326 kernel: ata_piix 0000:00:01.1: version 2.13 May 15 15:12:43.898739 kernel: usbcore: registered new interface driver usbfs May 15 15:12:43.898759 kernel: usbcore: registered new interface driver hub May 15 15:12:43.898771 kernel: usbcore: registered new device driver usb May 15 15:12:43.898789 kernel: scsi host1: ata_piix May 15 15:12:43.898922 kernel: scsi host2: ata_piix May 15 15:12:43.899032 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 May 15 15:12:43.899045 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 May 15 15:12:43.890834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 15:12:43.891021 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:12:43.892298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:12:43.895823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:12:43.897922 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 15:12:43.958703 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 15:12:43.981539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:12:43.991504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 15:12:44.000466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 15:12:44.007575 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 15:12:44.008023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 15:12:44.009998 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 15:12:44.024301 disk-uuid[599]: Primary Header is updated. May 15 15:12:44.024301 disk-uuid[599]: Secondary Entries is updated. May 15 15:12:44.024301 disk-uuid[599]: Secondary Header is updated. May 15 15:12:44.035209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 15:12:44.077012 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller May 15 15:12:44.102028 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 May 15 15:12:44.102236 kernel: uhci_hcd 0000:00:01.2: detected 2 ports May 15 15:12:44.102359 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 May 15 15:12:44.102469 kernel: hub 1-0:1.0: USB hub found May 15 15:12:44.102594 kernel: hub 1-0:1.0: 2 ports detected May 15 15:12:44.173282 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 15:12:44.174268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 15:12:44.174644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 15:12:44.175486 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 15:12:44.177096 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 15:12:44.196600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 15:12:45.044704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 15:12:45.046377 disk-uuid[600]: The operation has completed successfully. May 15 15:12:45.108140 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 15 15:12:45.108302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 15:12:45.142120 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 15:12:45.168652 sh[631]: Success May 15 15:12:45.189351 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 15:12:45.189428 kernel: device-mapper: uevent: version 1.0.3 May 15 15:12:45.191970 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 15:12:45.204203 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 15 15:12:45.265891 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 15:12:45.267276 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 15:12:45.282512 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 15:12:45.292203 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 15:12:45.294428 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (644) May 15 15:12:45.294491 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004 May 15 15:12:45.296330 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 15:12:45.296384 kernel: BTRFS info (device dm-0): using free-space-tree May 15 15:12:45.304399 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 15:12:45.305916 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 15:12:45.306867 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 15:12:45.308358 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 15:12:45.309830 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 15:12:45.343200 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (675) May 15 15:12:45.343283 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:12:45.345512 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 15:12:45.345566 kernel: BTRFS info (device vda6): using free-space-tree May 15 15:12:45.357248 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:12:45.359547 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 15:12:45.362403 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 15:12:45.434004 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 15:12:45.438331 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 15:12:45.482150 systemd-networkd[814]: lo: Link UP May 15 15:12:45.482161 systemd-networkd[814]: lo: Gained carrier May 15 15:12:45.487634 systemd-networkd[814]: Enumeration completed May 15 15:12:45.487972 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 15 15:12:45.487976 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
May 15 15:12:45.488833 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 15:12:45.489824 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 15:12:45.489829 systemd-networkd[814]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 15:12:45.492046 systemd-networkd[814]: eth0: Link UP May 15 15:12:45.492051 systemd-networkd[814]: eth0: Gained carrier May 15 15:12:45.492064 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 15 15:12:45.492998 systemd[1]: Reached target network.target - Network. May 15 15:12:45.496497 systemd-networkd[814]: eth1: Link UP May 15 15:12:45.496502 systemd-networkd[814]: eth1: Gained carrier May 15 15:12:45.496516 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 15:12:45.510286 systemd-networkd[814]: eth0: DHCPv4 address 165.232.158.142/20, gateway 165.232.144.1 acquired from 169.254.169.253 May 15 15:12:45.518280 systemd-networkd[814]: eth1: DHCPv4 address 10.124.0.33/20 acquired from 169.254.169.253 May 15 15:12:45.547208 ignition[732]: Ignition 2.21.0 May 15 15:12:45.547761 ignition[732]: Stage: fetch-offline May 15 15:12:45.547799 ignition[732]: no configs at "/usr/lib/ignition/base.d" May 15 15:12:45.547807 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:45.547916 ignition[732]: parsed url from cmdline: "" May 15 15:12:45.547920 ignition[732]: no config URL provided May 15 15:12:45.547926 ignition[732]: reading system config file "/usr/lib/ignition/user.ign" May 15 15:12:45.547933 ignition[732]: no config at "/usr/lib/ignition/user.ign" May 15 15:12:45.547938 ignition[732]: failed to fetch config: resource requires networking May 15 15:12:45.552883 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 15:12:45.550392 ignition[732]: Ignition finished successfully May 15 15:12:45.554948 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 15 15:12:45.588227 ignition[824]: Ignition 2.21.0 May 15 15:12:45.588240 ignition[824]: Stage: fetch May 15 15:12:45.588443 ignition[824]: no configs at "/usr/lib/ignition/base.d" May 15 15:12:45.588452 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:45.588543 ignition[824]: parsed url from cmdline: "" May 15 15:12:45.588547 ignition[824]: no config URL provided May 15 15:12:45.588552 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" May 15 15:12:45.588560 ignition[824]: no config at "/usr/lib/ignition/user.ign" May 15 15:12:45.588587 ignition[824]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 May 15 15:12:45.603494 ignition[824]: GET result: OK May 15 15:12:45.604384 ignition[824]: parsing config with SHA512: 740c9c77a7a1a96aaf697a7d3501a4b72251f27c7c444061a704c13bf88a643970e0248937baa2221d48c692268bf40630869d8153d1979a35cde66f098a657d May 15 15:12:45.613130 unknown[824]: fetched base config from "system" May 15 15:12:45.613144 unknown[824]: fetched base config from "system" May 15 15:12:45.613522 ignition[824]: fetch: fetch complete May 15 15:12:45.613153 unknown[824]: fetched user config from "digitalocean" May 15 15:12:45.613528 ignition[824]: fetch: fetch passed May 15 15:12:45.613577 ignition[824]: Ignition finished successfully May 15 15:12:45.615833 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 15:12:45.619346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 15:12:45.651487 ignition[830]: Ignition 2.21.0 May 15 15:12:45.651500 ignition[830]: Stage: kargs May 15 15:12:45.651656 ignition[830]: no configs at "/usr/lib/ignition/base.d" May 15 15:12:45.651667 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:45.652469 ignition[830]: kargs: kargs passed May 15 15:12:45.652522 ignition[830]: Ignition finished successfully May 15 15:12:45.653869 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 15:12:45.656103 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 15:12:45.697612 ignition[837]: Ignition 2.21.0 May 15 15:12:45.697638 ignition[837]: Stage: disks May 15 15:12:45.697953 ignition[837]: no configs at "/usr/lib/ignition/base.d" May 15 15:12:45.697970 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:45.699338 ignition[837]: disks: disks passed May 15 15:12:45.699410 ignition[837]: Ignition finished successfully May 15 15:12:45.700667 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 15:12:45.701830 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 15:12:45.702249 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 15:12:45.702985 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 15:12:45.703631 systemd[1]: Reached target sysinit.target - System Initialization. May 15 15:12:45.704312 systemd[1]: Reached target basic.target - Basic System. May 15 15:12:45.706074 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 15:12:45.736631 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 15:12:45.739739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 15:12:45.742290 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 15:12:45.865214 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none. May 15 15:12:45.865474 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 15:12:45.866403 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 15:12:45.868193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 15:12:45.869982 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 15:12:45.873071 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... May 15 15:12:45.882457 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 15 15:12:45.883636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 15:12:45.884548 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 15:12:45.888124 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 15:12:45.894700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 15:12:45.909198 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (853) May 15 15:12:45.930122 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:12:45.930217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 15:12:45.930240 kernel: BTRFS info (device vda6): using free-space-tree May 15 15:12:45.955358 coreos-metadata[856]: May 15 15:12:45.955 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:12:45.961420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 15:12:45.968441 coreos-metadata[856]: May 15 15:12:45.968 INFO Fetch successful May 15 15:12:45.973386 coreos-metadata[855]: May 15 15:12:45.973 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:12:45.974865 coreos-metadata[856]: May 15 15:12:45.974 INFO wrote hostname ci-4334.0.0-a-3982d56781 to /sysroot/etc/hostname May 15 15:12:45.975891 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 15:12:45.977268 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory May 15 15:12:45.982549 initrd-setup-root[891]: cut: /sysroot/etc/group: No such file or directory May 15 15:12:45.988303 initrd-setup-root[898]: cut: /sysroot/etc/shadow: No such file or directory May 15 15:12:45.989430 coreos-metadata[855]: May 15 15:12:45.989 INFO Fetch successful May 15 15:12:45.995891 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory May 15 15:12:45.996479 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 15 15:12:45.996596 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 15 15:12:46.093066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 15:12:46.095537 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 15:12:46.096578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 15:12:46.123207 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:12:46.138118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 15:12:46.152473 ignition[978]: INFO : Ignition 2.21.0 May 15 15:12:46.152473 ignition[978]: INFO : Stage: mount May 15 15:12:46.153460 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:12:46.153460 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:46.154447 ignition[978]: INFO : mount: mount passed May 15 15:12:46.154447 ignition[978]: INFO : Ignition finished successfully May 15 15:12:46.155323 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 15:12:46.157145 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 15:12:46.293068 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 15:12:46.295454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 15:12:46.330410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (989) May 15 15:12:46.330468 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:12:46.332562 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 15:12:46.332618 kernel: BTRFS info (device vda6): using free-space-tree May 15 15:12:46.339570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 15:12:46.371839 ignition[1005]: INFO : Ignition 2.21.0 May 15 15:12:46.371839 ignition[1005]: INFO : Stage: files May 15 15:12:46.376339 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:12:46.376339 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:46.377386 ignition[1005]: DEBUG : files: compiled without relabeling support, skipping May 15 15:12:46.378054 ignition[1005]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 15:12:46.378054 ignition[1005]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 15:12:46.380260 ignition[1005]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 15:12:46.380739 ignition[1005]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 15:12:46.381237 ignition[1005]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 15:12:46.380749 unknown[1005]: wrote ssh authorized keys file for user: core May 15 15:12:46.382604 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 15:12:46.383217 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 15:12:46.420227 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 15:12:46.553420 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing file "/sysroot/home/core/nginx.yaml" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 15:12:46.554292 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:12:46.561742 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 15:12:46.789347 systemd-networkd[814]: eth0: Gained IPv6LL May 15 15:12:47.041212 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 15:12:47.365309 systemd-networkd[814]: eth1: Gained IPv6LL May 15 15:12:47.390014 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:12:47.390014 ignition[1005]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 15:12:47.391811 ignition[1005]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 15:12:47.392679 ignition[1005]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 15:12:47.392679 ignition[1005]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 15:12:47.392679 ignition[1005]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 15:12:47.392679 ignition[1005]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 15:12:47.392679 ignition[1005]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 15:12:47.392679 ignition[1005]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 15:12:47.392679 ignition[1005]: INFO : files: files passed May 15 15:12:47.392679 ignition[1005]: INFO : Ignition finished successfully May 15 15:12:47.394790 systemd[1]: Finished 
ignition-files.service - Ignition (files). May 15 15:12:47.396915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 15:12:47.401320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 15:12:47.411629 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 15:12:47.411761 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 15:12:47.419871 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 15:12:47.419871 initrd-setup-root-after-ignition[1036]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 15:12:47.422089 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 15:12:47.423943 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 15:12:47.425447 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 15:12:47.426855 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 15:12:47.481972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 15:12:47.482116 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 15:12:47.482996 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 15:12:47.483719 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 15:12:47.484494 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 15:12:47.485376 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 15:12:47.508553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 15:12:47.510570 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 15:12:47.532284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 15:12:47.533471 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 15:12:47.534647 systemd[1]: Stopped target timers.target - Timer Units. May 15 15:12:47.535473 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 15:12:47.535640 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 15:12:47.537201 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 15:12:47.537984 systemd[1]: Stopped target basic.target - Basic System. May 15 15:12:47.538839 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 15:12:47.539712 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 15:12:47.540616 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 15:12:47.541605 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 15:12:47.542416 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 15:12:47.543562 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 15:12:47.544219 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 15:12:47.545082 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 15:12:47.545773 systemd[1]: Stopped target swap.target - Swaps. 
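For orientation, the files stage logged above wrote a small, fixed set of artifacts before the initrd began tearing itself down: the helm tarball under /opt, a handful of manifests in /home/core, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions symlink, and an enabled prepare-helm.service unit. A minimal sketch for spot-checking those paths on the booted system follows; every path is copied from the log, while the script itself is an assumption added for illustration, not something the image ships.

    # Sketch: verify the artifacts the Ignition "files" stage reported writing.
    # Paths come from the log above (written under /sysroot, visible at / after
    # switch-root); the check itself is illustrative only.
    import os

    EXPECTED_FILES = [
        "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "/home/core/install.sh",
        "/home/core/nginx.yaml",
        "/home/core/nfs-pod.yaml",
        "/home/core/nfs-pvc.yaml",
        "/etc/flatcar/update.conf",
        "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
        "/etc/systemd/system/prepare-helm.service",
    ]
    EXPECTED_LINK = ("/etc/extensions/kubernetes.raw",
                     "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw")

    for path in EXPECTED_FILES:
        print("ok  " if os.path.exists(path) else "MISSING", path)

    link, target = EXPECTED_LINK
    actual = os.readlink(link) if os.path.islink(link) else None
    print("symlink ok" if actual == target else f"symlink unexpected: {actual!r}")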
May 15 15:12:47.546327 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 15:12:47.546489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 15:12:47.547254 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 15:12:47.548077 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 15:12:47.548694 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 15:12:47.548920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 15:12:47.549566 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 15:12:47.549838 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 15:12:47.550906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 15:12:47.551061 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 15:12:47.551962 systemd[1]: ignition-files.service: Deactivated successfully. May 15 15:12:47.552105 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 15:12:47.552669 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 15:12:47.552810 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 15:12:47.554281 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 15:12:47.556405 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 15:12:47.556599 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 15:12:47.558966 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 15:12:47.560273 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 15:12:47.560409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 15:12:47.564146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 15:12:47.564285 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 15:12:47.569135 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 15:12:47.570363 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 15:12:47.590041 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 15:12:47.620438 ignition[1060]: INFO : Ignition 2.21.0 May 15 15:12:47.620438 ignition[1060]: INFO : Stage: umount May 15 15:12:47.620438 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:12:47.620438 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:12:47.620438 ignition[1060]: INFO : umount: umount passed May 15 15:12:47.620438 ignition[1060]: INFO : Ignition finished successfully May 15 15:12:47.598938 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 15:12:47.599063 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 15:12:47.623161 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 15:12:47.623348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 15:12:47.623808 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 15:12:47.623870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 15:12:47.625276 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 15:12:47.625338 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
May 15 15:12:47.625883 systemd[1]: Stopped target network.target - Network. May 15 15:12:47.626420 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 15:12:47.626472 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 15:12:47.627085 systemd[1]: Stopped target paths.target - Path Units. May 15 15:12:47.627619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 15:12:47.631309 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 15:12:47.631780 systemd[1]: Stopped target slices.target - Slice Units. May 15 15:12:47.632543 systemd[1]: Stopped target sockets.target - Socket Units. May 15 15:12:47.633161 systemd[1]: iscsid.socket: Deactivated successfully. May 15 15:12:47.633230 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 15:12:47.633818 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 15:12:47.633859 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 15:12:47.634356 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 15:12:47.634415 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 15:12:47.634995 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 15:12:47.635053 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 15:12:47.635743 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 15:12:47.636338 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 15:12:47.637595 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 15:12:47.637731 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 15:12:47.638832 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 15:12:47.638944 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 15:12:47.644590 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 15:12:47.644731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 15:12:47.648738 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 15:12:47.649006 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 15:12:47.649158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 15:12:47.651289 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 15:12:47.651991 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 15:12:47.652793 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 15:12:47.652837 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 15:12:47.654108 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 15:12:47.654518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 15:12:47.654570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 15:12:47.654978 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 15:12:47.655016 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 15:12:47.656294 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 15:12:47.656335 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 15 15:12:47.657013 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 15:12:47.657071 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 15:12:47.657879 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 15:12:47.660453 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 15:12:47.660517 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 15:12:47.669030 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 15:12:47.674471 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 15:12:47.675409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 15:12:47.675465 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 15:12:47.675883 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 15:12:47.675913 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 15:12:47.676240 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 15:12:47.676296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 15:12:47.676736 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 15:12:47.676791 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 15:12:47.677970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 15:12:47.678015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 15:12:47.680414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 15:12:47.681018 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 15:12:47.681069 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 15:12:47.683447 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 15:12:47.683507 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 15:12:47.685871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 15:12:47.685921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:12:47.688534 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 15:12:47.688604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 15:12:47.688644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 15:12:47.699913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 15:12:47.700055 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 15:12:47.701103 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 15:12:47.701219 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 15:12:47.702658 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 15:12:47.707467 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 15:12:47.730582 systemd[1]: Switching root. May 15 15:12:47.768733 systemd-journald[212]: Journal stopped May 15 15:12:48.849779 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). 
May 15 15:12:48.849854 kernel: SELinux: policy capability network_peer_controls=1 May 15 15:12:48.849874 kernel: SELinux: policy capability open_perms=1 May 15 15:12:48.849886 kernel: SELinux: policy capability extended_socket_class=1 May 15 15:12:48.849898 kernel: SELinux: policy capability always_check_network=0 May 15 15:12:48.849910 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 15:12:48.849921 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 15:12:48.849933 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 15:12:48.849944 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 15:12:48.849960 kernel: SELinux: policy capability userspace_initial_context=0 May 15 15:12:48.849975 kernel: audit: type=1403 audit(1747321967.903:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 15:12:48.849988 systemd[1]: Successfully loaded SELinux policy in 46.434ms. May 15 15:12:48.850008 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.036ms. May 15 15:12:48.850022 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 15:12:48.850035 systemd[1]: Detected virtualization kvm. May 15 15:12:48.850047 systemd[1]: Detected architecture x86-64. May 15 15:12:48.850058 systemd[1]: Detected first boot. May 15 15:12:48.850071 systemd[1]: Hostname set to <ci-4334.0.0-a-3982d56781>. May 15 15:12:48.850090 systemd[1]: Initializing machine ID from VM UUID. May 15 15:12:48.850103 zram_generator::config[1105]: No configuration found. May 15 15:12:48.850116 kernel: Guest personality initialized and is inactive May 15 15:12:48.850127 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 15:12:48.850151 kernel: Initialized host personality May 15 15:12:48.850163 kernel: NET: Registered PF_VSOCK protocol family May 15 15:12:48.850197 systemd[1]: Populated /etc with preset unit settings. May 15 15:12:48.850212 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 15:12:48.850225 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 15:12:48.850242 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 15:12:48.850254 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 15:12:48.850267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 15:12:48.850279 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 15:12:48.850292 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 15:12:48.850305 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 15:12:48.850318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 15:12:48.850330 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 15:12:48.850344 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 15:12:48.850357 systemd[1]: Created slice user.slice - User and Session Slice. May 15 15:12:48.850369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 15 15:12:48.850382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 15:12:48.850394 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 15:12:48.850407 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 15:12:48.850423 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 15:12:48.850435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 15:12:48.850453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 15:12:48.850470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 15:12:48.850483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 15:12:48.850496 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 15:12:48.850508 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 15:12:48.850520 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 15:12:48.850532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 15:12:48.850547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 15:12:48.850559 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 15:12:48.850571 systemd[1]: Reached target slices.target - Slice Units. May 15 15:12:48.850583 systemd[1]: Reached target swap.target - Swaps. May 15 15:12:48.850595 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 15:12:48.850608 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 15:12:48.850620 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 15:12:48.850632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 15:12:48.850645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 15:12:48.850657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 15:12:48.850672 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 15:12:48.850684 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 15:12:48.850696 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 15:12:48.850708 systemd[1]: Mounting media.mount - External Media Directory... May 15 15:12:48.850726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:48.850738 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 15:12:48.850752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 15:12:48.850769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 15:12:48.850794 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 15:12:48.850810 systemd[1]: Reached target machines.target - Containers. May 15 15:12:48.850829 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 15 15:12:48.850847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:12:48.850864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 15:12:48.850881 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 15:12:48.850898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 15:12:48.850915 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 15:12:48.850933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 15:12:48.850954 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 15:12:48.850967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 15:12:48.850980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 15:12:48.850992 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 15:12:48.851005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 15:12:48.851017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 15:12:48.851030 systemd[1]: Stopped systemd-fsck-usr.service. May 15 15:12:48.851043 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:12:48.851069 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 15:12:48.851084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 15:12:48.851097 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 15:12:48.851109 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 15:12:48.851122 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 15:12:48.851135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 15:12:48.851151 systemd[1]: verity-setup.service: Deactivated successfully. May 15 15:12:48.851164 systemd[1]: Stopped verity-setup.service. May 15 15:12:48.853086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:48.853113 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 15:12:48.853135 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 15:12:48.853148 systemd[1]: Mounted media.mount - External Media Directory. May 15 15:12:48.853160 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 15:12:48.853197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 15:12:48.853211 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 15:12:48.853224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 15:12:48.853237 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 15:12:48.853249 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 15:12:48.853262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 15:12:48.853278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 15:12:48.853290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 15:12:48.853303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 15:12:48.853316 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 15:12:48.853329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 15:12:48.853341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 15:12:48.853354 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 15:12:48.853366 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 15:12:48.853379 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 15:12:48.853395 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 15:12:48.853407 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 15:12:48.853419 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 15:12:48.853432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:12:48.853444 kernel: loop: module loaded May 15 15:12:48.853460 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 15:12:48.853473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 15:12:48.853669 systemd-journald[1179]: Collecting audit messages is disabled. May 15 15:12:48.853712 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 15:12:48.853726 kernel: fuse: init (API version 7.41) May 15 15:12:48.853741 systemd-journald[1179]: Journal started May 15 15:12:48.853767 systemd-journald[1179]: Runtime Journal (/run/log/journal/88e29c98fc0e4aee9640f8416ea08257) is 4.9M, max 39.5M, 34.6M free. May 15 15:12:48.537525 systemd[1]: Queued start job for default target multi-user.target. May 15 15:12:48.563015 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 15:12:48.563618 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 15:12:48.864205 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 15:12:48.867391 systemd[1]: Started systemd-journald.service - Journal Service. May 15 15:12:48.890679 kernel: loop0: detected capacity change from 0 to 8 May 15 15:12:48.869129 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 15:12:48.869389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 15:12:48.870128 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 15:12:48.871227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 15:12:48.871890 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 15:12:48.918265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 15:12:48.916214 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 15:12:48.921390 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
May 15 15:12:48.924311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 15:12:48.934781 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 15:12:48.957327 kernel: ACPI: bus type drm_connector registered May 15 15:12:48.954105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 15:12:48.954756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 15:12:48.962328 kernel: loop1: detected capacity change from 0 to 113872 May 15 15:12:48.959389 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 15:12:48.970625 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 15:12:48.970835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 15:12:48.991894 systemd-journald[1179]: Time spent on flushing to /var/log/journal/88e29c98fc0e4aee9640f8416ea08257 is 82.743ms for 1010 entries. May 15 15:12:48.991894 systemd-journald[1179]: System Journal (/var/log/journal/88e29c98fc0e4aee9640f8416ea08257) is 8M, max 195.6M, 187.6M free. May 15 15:12:49.090331 systemd-journald[1179]: Received client request to flush runtime journal. May 15 15:12:49.090401 kernel: loop2: detected capacity change from 0 to 146240 May 15 15:12:49.090418 kernel: loop3: detected capacity change from 0 to 210664 May 15 15:12:48.997477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 15:12:49.011903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 15:12:49.040417 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 15:12:49.050486 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 15:12:49.073539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 15:12:49.092937 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 15:12:49.112014 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 15:12:49.115478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 15:12:49.129314 kernel: loop4: detected capacity change from 0 to 8 May 15 15:12:49.149391 kernel: loop5: detected capacity change from 0 to 113872 May 15 15:12:49.165196 kernel: loop6: detected capacity change from 0 to 146240 May 15 15:12:49.182880 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. May 15 15:12:49.182899 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. May 15 15:12:49.189208 kernel: loop7: detected capacity change from 0 to 210664 May 15 15:12:49.194362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 15:12:49.219105 (sd-merge)[1249]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 15 15:12:49.221867 (sd-merge)[1249]: Merged extensions into '/usr'. May 15 15:12:49.236933 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... May 15 15:12:49.237148 systemd[1]: Reloading... May 15 15:12:49.465198 zram_generator::config[1277]: No configuration found. May 15 15:12:49.574236 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 15 15:12:49.655024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:12:49.745721 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 15:12:49.746051 systemd[1]: Reloading finished in 508 ms. May 15 15:12:49.762418 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 15:12:49.763193 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 15:12:49.768522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 15:12:49.776338 systemd[1]: Starting ensure-sysext.service... May 15 15:12:49.779442 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 15:12:49.788083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 15:12:49.813455 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)... May 15 15:12:49.813472 systemd[1]: Reloading... May 15 15:12:49.839001 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 15:12:49.839438 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 15:12:49.839723 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 15:12:49.840005 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 15:12:49.840899 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 15:12:49.841157 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 15 15:12:49.841265 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. May 15 15:12:49.848527 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. May 15 15:12:49.848540 systemd-tmpfiles[1322]: Skipping /boot May 15 15:12:49.874780 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. May 15 15:12:49.874795 systemd-tmpfiles[1322]: Skipping /boot May 15 15:12:49.950214 zram_generator::config[1353]: No configuration found. May 15 15:12:50.085208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:12:50.174509 systemd[1]: Reloading finished in 360 ms. May 15 15:12:50.195692 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 15:12:50.196660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 15:12:50.210193 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 15:12:50.216545 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 15:12:50.220508 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 15:12:50.226750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 15:12:50.231673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
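The '(sd-merge)' records above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images into /usr, followed by the service-manager reload. As a hedged sketch of how that merged state could be inspected afterwards (the directories follow the sysext layout implied by the log; invoking the systemd-sysext CLI here is an assumption about available tooling, not something the log itself does):

    # Sketch: enumerate candidate extension images and query systemd-sysext.
    import glob
    import subprocess

    for pattern in ("/etc/extensions/*", "/run/extensions/*", "/var/lib/extensions/*"):
        for path in sorted(glob.glob(pattern)):
            print("extension image candidate:", path)

    try:
        result = subprocess.run(["systemd-sysext", "status"],
                                capture_output=True, text=True, check=False)
        print(result.stdout or result.stderr)
    except FileNotFoundError:
        print("systemd-sysext CLI not available on this host")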
May 15 15:12:50.234533 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 15:12:50.243931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.244126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:12:50.247818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 15:12:50.250656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 15:12:50.255881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 15:12:50.256388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:12:50.256511 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:12:50.256611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.263757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 15:12:50.266724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.266916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:12:50.267085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:12:50.267184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:12:50.267277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.272289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.272557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:12:50.279901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 15:12:50.280632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:12:50.280750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:12:50.280928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:12:50.286365 systemd[1]: Finished ensure-sysext.service. May 15 15:12:50.296092 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 15:12:50.307418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 15 15:12:50.317975 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 15:12:50.318256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 15:12:50.322617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 15:12:50.324397 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 15:12:50.325085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 15:12:50.327662 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 15:12:50.333768 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 15:12:50.339626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 15:12:50.339852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 15:12:50.348555 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 15:12:50.351424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 15:12:50.352215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 15:12:50.373648 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 15:12:50.375861 systemd-udevd[1399]: Using default interface naming scheme 'v255'. May 15 15:12:50.381680 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 15:12:50.384256 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 15:12:50.393168 augenrules[1437]: No rules May 15 15:12:50.396577 systemd[1]: audit-rules.service: Deactivated successfully. May 15 15:12:50.397108 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 15:12:50.407134 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 15:12:50.416099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 15:12:50.427561 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 15:12:50.518858 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 15:12:50.522495 systemd[1]: Reached target time-set.target - System Time Set. May 15 15:12:50.555566 systemd-resolved[1398]: Positive Trust Anchors: May 15 15:12:50.555583 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 15:12:50.555621 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 15:12:50.561670 systemd-resolved[1398]: Using system hostname 'ci-4334.0.0-a-3982d56781'. May 15 15:12:50.565640 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 15 15:12:50.566135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 15:12:50.566725 systemd[1]: Reached target sysinit.target - System Initialization. May 15 15:12:50.567360 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 15:12:50.567928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 15:12:50.568710 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 15:12:50.569353 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 15:12:50.570272 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 15:12:50.570823 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 15:12:50.571311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 15:12:50.571341 systemd[1]: Reached target paths.target - Path Units. May 15 15:12:50.571787 systemd[1]: Reached target timers.target - Timer Units. May 15 15:12:50.573846 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 15:12:50.576650 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 15:12:50.583677 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 15:12:50.584840 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 15:12:50.585855 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 15:12:50.594134 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 15:12:50.596125 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 15:12:50.597810 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 15:12:50.601611 systemd[1]: Reached target sockets.target - Socket Units. May 15 15:12:50.602238 systemd[1]: Reached target basic.target - Basic System. May 15 15:12:50.603095 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 15:12:50.603127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 15:12:50.605865 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 15:12:50.608905 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 15:12:50.614057 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 15:12:50.619878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 15:12:50.623776 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 15:12:50.624188 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 15:12:50.629549 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 15:12:50.636525 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 15:12:50.640256 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 15:12:50.647958 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 15 15:12:50.653939 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 15:12:50.668705 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 15:12:50.669999 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 15:12:50.674547 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 15:12:50.677904 coreos-metadata[1478]: May 15 15:12:50.677 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:12:50.678149 coreos-metadata[1478]: May 15 15:12:50.678 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 15 15:12:50.679371 systemd[1]: Starting update-engine.service - Update Engine... May 15 15:12:50.685138 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 15:12:50.687352 jq[1481]: false May 15 15:12:50.689715 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 15:12:50.690455 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 15:12:50.691304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 15:12:50.710675 oslogin_cache_refresh[1485]: Refreshing passwd entry cache May 15 15:12:50.712540 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing passwd entry cache May 15 15:12:50.718057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 15:12:50.718295 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 15:12:50.724639 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting users, quitting May 15 15:12:50.724639 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 15:12:50.724639 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing group entry cache May 15 15:12:50.724639 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting groups, quitting May 15 15:12:50.724639 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 15:12:50.724143 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 15:12:50.721729 oslogin_cache_refresh[1485]: Failure getting users, quitting May 15 15:12:50.721750 oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 15:12:50.721811 oslogin_cache_refresh[1485]: Refreshing group entry cache May 15 15:12:50.722415 oslogin_cache_refresh[1485]: Failure getting groups, quitting May 15 15:12:50.722426 oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 15:12:50.725456 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 15:12:50.729252 jq[1495]: true May 15 15:12:50.745746 dbus-daemon[1479]: [system] SELinux support is enabled May 15 15:12:50.745945 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 15 15:12:50.750566 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 15:12:50.750602 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 15:12:50.751051 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 15:12:50.751068 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 15:12:50.765931 extend-filesystems[1482]: Found loop4 May 15 15:12:50.769205 extend-filesystems[1482]: Found loop5 May 15 15:12:50.769205 extend-filesystems[1482]: Found loop6 May 15 15:12:50.769205 extend-filesystems[1482]: Found loop7 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda May 15 15:12:50.769205 extend-filesystems[1482]: Found vda1 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda2 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda3 May 15 15:12:50.769205 extend-filesystems[1482]: Found usr May 15 15:12:50.769205 extend-filesystems[1482]: Found vda4 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda6 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda7 May 15 15:12:50.769205 extend-filesystems[1482]: Found vda9 May 15 15:12:50.769205 extend-filesystems[1482]: Found vdb May 15 15:12:50.798250 tar[1498]: linux-amd64/helm May 15 15:12:50.799668 update_engine[1494]: I20250515 15:12:50.774058 1494 main.cc:92] Flatcar Update Engine starting May 15 15:12:50.799668 update_engine[1494]: I20250515 15:12:50.785379 1494 update_check_scheduler.cc:74] Next update check in 3m58s May 15 15:12:50.770377 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 15:12:50.800010 jq[1507]: true May 15 15:12:50.770822 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 15:12:50.784461 systemd[1]: Started update-engine.service - Update Engine. May 15 15:12:50.801417 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 15:12:50.802235 systemd[1]: motdgen.service: Deactivated successfully. May 15 15:12:50.802478 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 15:12:50.811935 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 15 15:12:50.814485 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 15:12:50.816813 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 15 15:12:50.818372 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 15:12:50.844370 kernel: ISO 9660 Extensions: RRIP_1991A May 15 15:12:50.879352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 15:12:50.883326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 15:12:50.905002 bash[1544]: Updated "/home/core/.ssh/authorized_keys" May 15 15:12:50.905019 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 15 15:12:50.906802 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
May 15 15:12:50.910434 systemd[1]: Starting sshkeys.service... May 15 15:12:50.910781 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 15 15:12:50.933651 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 15:12:50.935331 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 15:12:50.995921 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 15:12:51.038951 coreos-metadata[1549]: May 15 15:12:51.038 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:12:51.041506 coreos-metadata[1549]: May 15 15:12:51.040 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 15 15:12:51.090131 kernel: mousedev: PS/2 mouse device common for all mice May 15 15:12:51.100977 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 15:12:51.116826 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 15:12:51.149205 kernel: ACPI: button: Power Button [PWRF] May 15 15:12:51.168292 systemd-networkd[1449]: lo: Link UP May 15 15:12:51.168300 systemd-networkd[1449]: lo: Gained carrier May 15 15:12:51.175292 systemd-networkd[1449]: Enumeration completed May 15 15:12:51.175470 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 15:12:51.176916 systemd[1]: Reached target network.target - Network. May 15 15:12:51.183287 systemd[1]: Starting containerd.service - containerd container runtime... May 15 15:12:51.187946 systemd-networkd[1449]: eth0: Configuring with /run/systemd/network/10-da:75:be:f8:05:62.network. May 15 15:12:51.189455 systemd-networkd[1449]: eth1: Configuring with /run/systemd/network/10-da:0a:a1:e2:5c:52.network. May 15 15:12:51.190440 systemd-logind[1490]: New seat seat0. May 15 15:12:51.193658 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 15:12:51.196693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 15:12:51.197502 systemd[1]: Started systemd-logind.service - User Login Management. May 15 15:12:51.200293 systemd-networkd[1449]: eth0: Link UP May 15 15:12:51.203342 systemd-networkd[1449]: eth0: Gained carrier May 15 15:12:51.208477 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 15 15:12:51.208793 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 15:12:51.210251 systemd-networkd[1449]: eth1: Link UP May 15 15:12:51.213125 systemd-networkd[1449]: eth1: Gained carrier May 15 15:12:51.225650 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 15:12:51.228499 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. 
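The coreos-metadata fetches above fail on their first attempt because they run before eth0/eth1 finish configuring; the retries later in the log succeed once the links are up. A minimal, purely illustrative sketch of that poll-until-reachable pattern against the same DigitalOcean endpoint (this is not the coreos-metadata implementation, just an assumption-laden example):

    import json
    import time
    import urllib.request

    # Link-local metadata endpoint named in the log entries above.
    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def fetch_metadata(attempts: int = 10, delay: float = 1.0) -> dict:
        """Poll the metadata service until the network is configured."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
                    return json.load(resp)
            except OSError as err:
                # Before the interface is up this fails, matching
                # "Attempt #1 ... Failed to fetch" in the log above.
                print(f"Attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    if __name__ == "__main__":
        # Field names inside the JSON are assumptions for illustration only.
        print(fetch_metadata().get("hostname"))
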
May 15 15:12:51.263332 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 15 15:12:51.263408 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 15 15:12:51.269362 kernel: Console: switching to colour dummy device 80x25 May 15 15:12:51.269432 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 15 15:12:51.269448 kernel: [drm] features: -context_init May 15 15:12:51.271215 kernel: [drm] number of scanouts: 1 May 15 15:12:51.271309 kernel: [drm] number of cap sets: 0 May 15 15:12:51.274192 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 15 15:12:51.296444 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 15:12:51.296934 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 15:12:51.657577 containerd[1566]: time="2025-05-15T15:12:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 15:12:51.659114 containerd[1566]: time="2025-05-15T15:12:51.659067148Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 15:12:51.678320 coreos-metadata[1478]: May 15 15:12:51.678 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 15 15:12:51.693118 coreos-metadata[1478]: May 15 15:12:51.693 INFO Fetch successful May 15 15:12:51.706742 containerd[1566]: time="2025-05-15T15:12:51.706696933Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.729µs" May 15 15:12:51.706742 containerd[1566]: time="2025-05-15T15:12:51.706733485Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 15:12:51.706742 containerd[1566]: time="2025-05-15T15:12:51.706753300Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 15:12:51.707001 containerd[1566]: time="2025-05-15T15:12:51.706980113Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 15:12:51.707028 containerd[1566]: time="2025-05-15T15:12:51.707004672Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 15:12:51.707048 containerd[1566]: time="2025-05-15T15:12:51.707032254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 15:12:51.707107 containerd[1566]: time="2025-05-15T15:12:51.707090742Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 15:12:51.707132 containerd[1566]: time="2025-05-15T15:12:51.707105199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 15:12:51.715482 containerd[1566]: time="2025-05-15T15:12:51.715418992Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 15:12:51.715482 containerd[1566]: time="2025-05-15T15:12:51.715464390Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 15:12:51.715482 containerd[1566]: time="2025-05-15T15:12:51.715483471Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 15:12:51.715730 containerd[1566]: time="2025-05-15T15:12:51.715494214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 15:12:51.715730 containerd[1566]: time="2025-05-15T15:12:51.715630603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 15:12:51.715883 containerd[1566]: time="2025-05-15T15:12:51.715849654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 15:12:51.715960 containerd[1566]: time="2025-05-15T15:12:51.715886539Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 15:12:51.715960 containerd[1566]: time="2025-05-15T15:12:51.715897461Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 15:12:51.715960 containerd[1566]: time="2025-05-15T15:12:51.715933568Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 15:12:51.716319 containerd[1566]: time="2025-05-15T15:12:51.716288442Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 15:12:51.716592 containerd[1566]: time="2025-05-15T15:12:51.716563526Z" level=info msg="metadata content store policy set" policy=shared May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722478205Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722549506Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722569438Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722586903Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722604121Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722633924Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722661263Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722678772Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722696637Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: 
time="2025-05-15T15:12:51.722710654Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722723273Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722741983Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722921272Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 15:12:51.723208 containerd[1566]: time="2025-05-15T15:12:51.722963127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.722986770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723005120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723021809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723036026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723051679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723065914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723084257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723100374Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 15:12:51.723670 containerd[1566]: time="2025-05-15T15:12:51.723116298Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 15:12:51.724007 containerd[1566]: time="2025-05-15T15:12:51.723982966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 15:12:51.724065 containerd[1566]: time="2025-05-15T15:12:51.724056606Z" level=info msg="Start snapshots syncer" May 15 15:12:51.724126 containerd[1566]: time="2025-05-15T15:12:51.724115327Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 15:12:51.724490 containerd[1566]: time="2025-05-15T15:12:51.724448925Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 15:12:51.724693 containerd[1566]: time="2025-05-15T15:12:51.724675879Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726245165Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726422388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726447282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726457985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726468714Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726484031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726494784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726505073Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726532437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: 
time="2025-05-15T15:12:51.726543096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726555429Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726587855Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726603393Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 15:12:51.728181 containerd[1566]: time="2025-05-15T15:12:51.726612248Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726620849Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726628630Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726637325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726647935Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726664072Z" level=info msg="runtime interface created" May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726669113Z" level=info msg="created NRI interface" May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726677557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726692044Z" level=info msg="Connect containerd service" May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.726722336Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 15:12:51.728652 containerd[1566]: time="2025-05-15T15:12:51.727702862Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 15:12:51.774926 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 15:12:51.776094 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 15:12:51.789853 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 15:12:51.812563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:12:51.853892 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) May 15 15:12:51.963036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 15:12:51.963759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 15:12:51.967012 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 15:12:51.974673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025385610Z" level=info msg="Start subscribing containerd event" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025443811Z" level=info msg="Start recovering state" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025558979Z" level=info msg="Start event monitor" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025576953Z" level=info msg="Start cni network conf syncer for default" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025586699Z" level=info msg="Start streaming server" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025597141Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025605413Z" level=info msg="runtime interface starting up..." May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025611028Z" level=info msg="starting plugins..." May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.025625054Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.026068812Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.026111166Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 15:12:52.027199 containerd[1566]: time="2025-05-15T15:12:52.026168538Z" level=info msg="containerd successfully booted in 0.369729s" May 15 15:12:52.026376 systemd[1]: Started containerd.service - containerd container runtime. May 15 15:12:52.042954 coreos-metadata[1549]: May 15 15:12:52.042 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 15 15:12:52.043331 kernel: EDAC MC: Ver: 3.0.0 May 15 15:12:52.055863 coreos-metadata[1549]: May 15 15:12:52.055 INFO Fetch successful May 15 15:12:52.067135 unknown[1549]: wrote ssh authorized keys file for user: core May 15 15:12:52.097678 update-ssh-keys[1614]: Updated "/home/core/.ssh/authorized_keys" May 15 15:12:52.099399 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 15:12:52.104228 systemd[1]: Finished sshkeys.service. May 15 15:12:52.135783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:12:52.198800 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 15:12:52.224314 tar[1498]: linux-amd64/LICENSE May 15 15:12:52.224732 tar[1498]: linux-amd64/README.md May 15 15:12:52.227612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 15:12:52.232206 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 15:12:52.244798 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 15:12:52.247207 systemd[1]: issuegen.service: Deactivated successfully. May 15 15:12:52.247456 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 15:12:52.250061 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 15:12:52.276669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 15:12:52.278916 systemd[1]: Started getty@tty1.service - Getty on tty1. 
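update-ssh-keys and coreos-metadata-sshkeys above install the droplet's public keys into /home/core/.ssh/authorized_keys. A rough sketch of that kind of atomic key install, assuming the keys arrive as a plain list of strings (illustrative only, not the Flatcar tooling itself):

    import os
    from pathlib import Path

    # File named in the 'Updated "/home/core/.ssh/authorized_keys"' messages above.
    AUTHORIZED_KEYS = Path("/home/core/.ssh/authorized_keys")

    def install_keys(keys: list[str]) -> None:
        """Write public keys via a temp file + rename so sshd never sees a partial file."""
        AUTHORIZED_KEYS.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
        tmp = AUTHORIZED_KEYS.with_suffix(".tmp")
        tmp.write_text("\n".join(keys) + "\n")
        tmp.chmod(0o600)
        os.replace(tmp, AUTHORIZED_KEYS)  # atomic swap on the same filesystem
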
May 15 15:12:52.281490 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 15:12:52.281780 systemd[1]: Reached target getty.target - Login Prompts. May 15 15:12:52.421382 systemd-networkd[1449]: eth1: Gained IPv6LL May 15 15:12:52.422382 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 15:12:52.424654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 15:12:52.426010 systemd[1]: Reached target network-online.target - Network is Online. May 15 15:12:52.428487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:12:52.432291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 15:12:52.458833 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 15:12:53.126055 systemd-networkd[1449]: eth0: Gained IPv6LL May 15 15:12:53.128406 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 15:12:53.309687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:12:53.310334 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 15:12:53.311572 systemd[1]: Startup finished in 3.141s (kernel) + 5.288s (initrd) + 5.453s (userspace) = 13.883s. May 15 15:12:53.319523 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 15:12:53.906478 kubelet[1657]: E0515 15:12:53.906409 1657 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 15:12:53.909860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 15:12:53.910077 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 15:12:53.910776 systemd[1]: kubelet.service: Consumed 1.128s CPU time, 243M memory peak. May 15 15:12:56.116610 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 15:12:56.118396 systemd[1]: Started sshd@0-165.232.158.142:22-139.178.68.195:35642.service - OpenSSH per-connection server daemon (139.178.68.195:35642). May 15 15:12:56.208033 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 35642 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:56.209998 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:56.218044 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 15:12:56.219331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 15:12:56.231843 systemd-logind[1490]: New session 1 of user core. May 15 15:12:56.243577 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 15:12:56.246930 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 15:12:56.263124 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 15:12:56.266096 systemd-logind[1490]: New session c1 of user core. May 15 15:12:56.414088 systemd[1674]: Queued start job for default target default.target. May 15 15:12:56.420799 systemd[1674]: Created slice app.slice - User Application Slice. 
May 15 15:12:56.420844 systemd[1674]: Reached target paths.target - Paths. May 15 15:12:56.420905 systemd[1674]: Reached target timers.target - Timers. May 15 15:12:56.422746 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 15:12:56.435585 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 15:12:56.435743 systemd[1674]: Reached target sockets.target - Sockets. May 15 15:12:56.435805 systemd[1674]: Reached target basic.target - Basic System. May 15 15:12:56.435856 systemd[1674]: Reached target default.target - Main User Target. May 15 15:12:56.435898 systemd[1674]: Startup finished in 162ms. May 15 15:12:56.435945 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 15:12:56.443438 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 15:12:56.513559 systemd[1]: Started sshd@1-165.232.158.142:22-139.178.68.195:35648.service - OpenSSH per-connection server daemon (139.178.68.195:35648). May 15 15:12:56.570863 sshd[1685]: Accepted publickey for core from 139.178.68.195 port 35648 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:56.572424 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:56.579065 systemd-logind[1490]: New session 2 of user core. May 15 15:12:56.581353 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 15:12:56.641588 sshd[1687]: Connection closed by 139.178.68.195 port 35648 May 15 15:12:56.642164 sshd-session[1685]: pam_unix(sshd:session): session closed for user core May 15 15:12:56.652998 systemd[1]: sshd@1-165.232.158.142:22-139.178.68.195:35648.service: Deactivated successfully. May 15 15:12:56.655139 systemd[1]: session-2.scope: Deactivated successfully. May 15 15:12:56.656555 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. May 15 15:12:56.659767 systemd[1]: Started sshd@2-165.232.158.142:22-139.178.68.195:35654.service - OpenSSH per-connection server daemon (139.178.68.195:35654). May 15 15:12:56.661251 systemd-logind[1490]: Removed session 2. May 15 15:12:56.722599 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 35654 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:56.724073 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:56.730448 systemd-logind[1490]: New session 3 of user core. May 15 15:12:56.741457 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 15:12:56.797256 sshd[1695]: Connection closed by 139.178.68.195 port 35654 May 15 15:12:56.797975 sshd-session[1693]: pam_unix(sshd:session): session closed for user core May 15 15:12:56.807128 systemd[1]: sshd@2-165.232.158.142:22-139.178.68.195:35654.service: Deactivated successfully. May 15 15:12:56.809022 systemd[1]: session-3.scope: Deactivated successfully. May 15 15:12:56.809976 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. May 15 15:12:56.813120 systemd[1]: Started sshd@3-165.232.158.142:22-139.178.68.195:35660.service - OpenSSH per-connection server daemon (139.178.68.195:35660). May 15 15:12:56.815555 systemd-logind[1490]: Removed session 3. 
May 15 15:12:56.869611 sshd[1701]: Accepted publickey for core from 139.178.68.195 port 35660 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:56.871256 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:56.877868 systemd-logind[1490]: New session 4 of user core. May 15 15:12:56.888464 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 15:12:56.949769 sshd[1703]: Connection closed by 139.178.68.195 port 35660 May 15 15:12:56.950520 sshd-session[1701]: pam_unix(sshd:session): session closed for user core May 15 15:12:56.965139 systemd[1]: sshd@3-165.232.158.142:22-139.178.68.195:35660.service: Deactivated successfully. May 15 15:12:56.968420 systemd[1]: session-4.scope: Deactivated successfully. May 15 15:12:56.970248 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. May 15 15:12:56.973413 systemd[1]: Started sshd@4-165.232.158.142:22-139.178.68.195:35676.service - OpenSSH per-connection server daemon (139.178.68.195:35676). May 15 15:12:56.975134 systemd-logind[1490]: Removed session 4. May 15 15:12:57.037736 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 35676 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:57.039224 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:57.044412 systemd-logind[1490]: New session 5 of user core. May 15 15:12:57.054418 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 15:12:57.122072 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 15:12:57.122393 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:12:57.138697 sudo[1712]: pam_unix(sudo:session): session closed for user root May 15 15:12:57.144196 sshd[1711]: Connection closed by 139.178.68.195 port 35676 May 15 15:12:57.143226 sshd-session[1709]: pam_unix(sshd:session): session closed for user core May 15 15:12:57.161016 systemd[1]: sshd@4-165.232.158.142:22-139.178.68.195:35676.service: Deactivated successfully. May 15 15:12:57.162962 systemd[1]: session-5.scope: Deactivated successfully. May 15 15:12:57.163774 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. May 15 15:12:57.167864 systemd[1]: Started sshd@5-165.232.158.142:22-139.178.68.195:35686.service - OpenSSH per-connection server daemon (139.178.68.195:35686). May 15 15:12:57.170259 systemd-logind[1490]: Removed session 5. May 15 15:12:57.221911 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 35686 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:57.223537 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:57.229228 systemd-logind[1490]: New session 6 of user core. May 15 15:12:57.236398 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 15:12:57.295429 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 15:12:57.295743 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:12:57.300726 sudo[1722]: pam_unix(sudo:session): session closed for user root May 15 15:12:57.307878 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 15:12:57.308401 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:12:57.320199 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 15:12:57.369191 augenrules[1744]: No rules May 15 15:12:57.370894 systemd[1]: audit-rules.service: Deactivated successfully. May 15 15:12:57.371288 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 15:12:57.372904 sudo[1721]: pam_unix(sudo:session): session closed for user root May 15 15:12:57.375977 sshd[1720]: Connection closed by 139.178.68.195 port 35686 May 15 15:12:57.377502 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 15 15:12:57.385747 systemd[1]: sshd@5-165.232.158.142:22-139.178.68.195:35686.service: Deactivated successfully. May 15 15:12:57.387602 systemd[1]: session-6.scope: Deactivated successfully. May 15 15:12:57.388454 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. May 15 15:12:57.391512 systemd[1]: Started sshd@6-165.232.158.142:22-139.178.68.195:35702.service - OpenSSH per-connection server daemon (139.178.68.195:35702). May 15 15:12:57.392610 systemd-logind[1490]: Removed session 6. May 15 15:12:57.447283 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 35702 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:12:57.448748 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:12:57.453744 systemd-logind[1490]: New session 7 of user core. May 15 15:12:57.462509 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 15:12:57.520470 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 15:12:57.520750 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:12:57.987260 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 15:12:58.010689 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 15:12:58.299003 dockerd[1774]: time="2025-05-15T15:12:58.298857669Z" level=info msg="Starting up" May 15 15:12:58.301541 dockerd[1774]: time="2025-05-15T15:12:58.301506639Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 15:12:58.328885 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2668972247-merged.mount: Deactivated successfully. May 15 15:12:58.347718 systemd[1]: var-lib-docker-metacopy\x2dcheck2189468167-merged.mount: Deactivated successfully. May 15 15:12:58.371542 dockerd[1774]: time="2025-05-15T15:12:58.370930075Z" level=info msg="Loading containers: start." May 15 15:12:58.382199 kernel: Initializing XFRM netlink socket May 15 15:12:58.598057 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 15:12:58.606333 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. 
May 15 15:12:58.642712 systemd-networkd[1449]: docker0: Link UP May 15 15:12:58.643073 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. May 15 15:12:58.644942 dockerd[1774]: time="2025-05-15T15:12:58.644830057Z" level=info msg="Loading containers: done." May 15 15:12:58.659780 dockerd[1774]: time="2025-05-15T15:12:58.659726209Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 15:12:58.659947 dockerd[1774]: time="2025-05-15T15:12:58.659822457Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 15:12:58.659947 dockerd[1774]: time="2025-05-15T15:12:58.659933651Z" level=info msg="Initializing buildkit" May 15 15:12:58.683507 dockerd[1774]: time="2025-05-15T15:12:58.683449931Z" level=info msg="Completed buildkit initialization" May 15 15:12:58.688148 dockerd[1774]: time="2025-05-15T15:12:58.687798235Z" level=info msg="Daemon has completed initialization" May 15 15:12:58.688045 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 15:12:58.689041 dockerd[1774]: time="2025-05-15T15:12:58.688848130Z" level=info msg="API listen on /run/docker.sock" May 15 15:12:59.761905 containerd[1566]: time="2025-05-15T15:12:59.761493130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 15:13:00.289328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083652537.mount: Deactivated successfully. May 15 15:13:02.071087 containerd[1566]: time="2025-05-15T15:13:02.071014999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:02.072573 containerd[1566]: time="2025-05-15T15:13:02.072516257Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 15 15:13:02.073470 containerd[1566]: time="2025-05-15T15:13:02.073274377Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:02.078112 containerd[1566]: time="2025-05-15T15:13:02.077271346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:02.079662 containerd[1566]: time="2025-05-15T15:13:02.079606033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.318055132s" May 15 15:13:02.079662 containerd[1566]: time="2025-05-15T15:13:02.079657605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 15 15:13:02.124084 containerd[1566]: time="2025-05-15T15:13:02.123999077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 15:13:03.704348 systemd[1]: Started 
sshd@7-165.232.158.142:22-218.92.0.166:33785.service - OpenSSH per-connection server daemon (218.92.0.166:33785). May 15 15:13:04.035773 containerd[1566]: time="2025-05-15T15:13:04.035327012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:04.036545 containerd[1566]: time="2025-05-15T15:13:04.036501910Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 15 15:13:04.037230 containerd[1566]: time="2025-05-15T15:13:04.037196412Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:04.039655 containerd[1566]: time="2025-05-15T15:13:04.039582795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:04.042192 containerd[1566]: time="2025-05-15T15:13:04.040575048Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.916227672s" May 15 15:13:04.042192 containerd[1566]: time="2025-05-15T15:13:04.040612611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 15 15:13:04.069889 containerd[1566]: time="2025-05-15T15:13:04.069847201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 15:13:04.160742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 15:13:04.163332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:13:04.309083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:04.321789 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 15:13:04.386069 kubelet[2075]: E0515 15:13:04.386004 2075 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 15:13:04.392369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 15:13:04.392566 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 15:13:04.393394 systemd[1]: kubelet.service: Consumed 187ms CPU time, 96.3M memory peak. 
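At this point kubelet has exited twice with the same error: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails with status 1 and systemd schedules a restart. The file presumably appears once cluster bootstrap (kubeadm or an equivalent provisioner; that is an assumption, the log does not say which) writes it. A tiny sketch of the failing precondition, for illustration only:

    from pathlib import Path

    # Path from the kubelet error above:
    # "open /var/lib/kubelet/config.yaml: no such file or directory".
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_can_start() -> bool:
        """kubelet exits while this file is missing; systemd then restarts the unit."""
        if not KUBELET_CONFIG.is_file():
            print(f"open {KUBELET_CONFIG}: no such file or directory")
            return False
        return True

    if __name__ == "__main__":
        kubelet_can_start()
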
May 15 15:13:04.836952 sshd-session[2083]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:13:05.302917 containerd[1566]: time="2025-05-15T15:13:05.302781645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:05.303910 containerd[1566]: time="2025-05-15T15:13:05.303872116Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 15 15:13:05.306010 containerd[1566]: time="2025-05-15T15:13:05.305922873Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:05.308517 containerd[1566]: time="2025-05-15T15:13:05.308462555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:05.309585 containerd[1566]: time="2025-05-15T15:13:05.309455786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.239569812s" May 15 15:13:05.309585 containerd[1566]: time="2025-05-15T15:13:05.309492511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 15 15:13:05.332362 containerd[1566]: time="2025-05-15T15:13:05.332306476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 15:13:06.040491 sshd[2058]: PAM: Permission denied for root from 218.92.0.166 May 15 15:13:06.341574 sshd-session[2099]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:13:06.367643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636039408.mount: Deactivated successfully. 
May 15 15:13:06.942977 containerd[1566]: time="2025-05-15T15:13:06.942917081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:06.944744 containerd[1566]: time="2025-05-15T15:13:06.944674802Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 15 15:13:06.946195 containerd[1566]: time="2025-05-15T15:13:06.945551336Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:06.947131 containerd[1566]: time="2025-05-15T15:13:06.947105553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:06.948600 containerd[1566]: time="2025-05-15T15:13:06.948572333Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.616219634s" May 15 15:13:06.948721 containerd[1566]: time="2025-05-15T15:13:06.948707404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 15 15:13:06.970988 containerd[1566]: time="2025-05-15T15:13:06.970866509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:13:06.972556 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 15 15:13:07.441630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352052902.mount: Deactivated successfully. 
May 15 15:13:07.822191 sshd[2058]: PAM: Permission denied for root from 218.92.0.166 May 15 15:13:08.123063 sshd-session[2155]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:13:08.181199 containerd[1566]: time="2025-05-15T15:13:08.181099079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.183286 containerd[1566]: time="2025-05-15T15:13:08.183227907Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 15:13:08.183986 containerd[1566]: time="2025-05-15T15:13:08.183914043Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.186616 containerd[1566]: time="2025-05-15T15:13:08.186555717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.188047 containerd[1566]: time="2025-05-15T15:13:08.187633788Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.216334324s" May 15 15:13:08.188047 containerd[1566]: time="2025-05-15T15:13:08.187687139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:13:08.212370 containerd[1566]: time="2025-05-15T15:13:08.212327082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 15:13:08.653433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480322103.mount: Deactivated successfully. 
May 15 15:13:08.658092 containerd[1566]: time="2025-05-15T15:13:08.658019462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.659075 containerd[1566]: time="2025-05-15T15:13:08.659023072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 15 15:13:08.659810 containerd[1566]: time="2025-05-15T15:13:08.659761473Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.662005 containerd[1566]: time="2025-05-15T15:13:08.661920950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:08.663038 containerd[1566]: time="2025-05-15T15:13:08.662865915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 450.492411ms" May 15 15:13:08.663038 containerd[1566]: time="2025-05-15T15:13:08.662913008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:13:08.697328 containerd[1566]: time="2025-05-15T15:13:08.697166436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 15:13:09.156589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124288973.mount: Deactivated successfully. May 15 15:13:09.738443 sshd[2058]: PAM: Permission denied for root from 218.92.0.166 May 15 15:13:09.888414 sshd[2058]: Received disconnect from 218.92.0.166 port 33785:11: [preauth] May 15 15:13:09.889230 sshd[2058]: Disconnected from authenticating user root 218.92.0.166 port 33785 [preauth] May 15 15:13:09.891929 systemd[1]: sshd@7-165.232.158.142:22-218.92.0.166:33785.service: Deactivated successfully. May 15 15:13:10.021402 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
May 15 15:13:10.902857 containerd[1566]: time="2025-05-15T15:13:10.902795701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:10.903781 containerd[1566]: time="2025-05-15T15:13:10.903749065Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 15 15:13:10.904468 containerd[1566]: time="2025-05-15T15:13:10.904247059Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:10.906580 containerd[1566]: time="2025-05-15T15:13:10.906526103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:10.907829 containerd[1566]: time="2025-05-15T15:13:10.907428594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.209901527s" May 15 15:13:10.907829 containerd[1566]: time="2025-05-15T15:13:10.907473856Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:13:13.704633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:13.705577 systemd[1]: kubelet.service: Consumed 187ms CPU time, 96.3M memory peak. May 15 15:13:13.710491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:13:13.741473 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... May 15 15:13:13.741662 systemd[1]: Reloading... May 15 15:13:13.877220 zram_generator::config[2348]: No configuration found. May 15 15:13:13.997041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:13:14.125861 systemd[1]: Reloading finished in 383 ms. May 15 15:13:14.181452 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 15:13:14.181546 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 15:13:14.182106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:14.182267 systemd[1]: kubelet.service: Consumed 105ms CPU time, 83.7M memory peak. May 15 15:13:14.184138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:13:14.320208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:14.334069 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 15:13:14.384277 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:13:14.384638 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 15 15:13:14.384710 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:13:14.387748 kubelet[2402]: I0515 15:13:14.387671 2402 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 15:13:14.692275 kubelet[2402]: I0515 15:13:14.692154 2402 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 15:13:14.692275 kubelet[2402]: I0515 15:13:14.692192 2402 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 15:13:14.692649 kubelet[2402]: I0515 15:13:14.692448 2402 server.go:927] "Client rotation is on, will bootstrap in background" May 15 15:13:14.721239 kubelet[2402]: I0515 15:13:14.720459 2402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 15:13:14.721239 kubelet[2402]: E0515 15:13:14.721134 2402 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://165.232.158.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.733950 kubelet[2402]: I0515 15:13:14.733918 2402 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 15:13:14.737780 kubelet[2402]: I0515 15:13:14.737712 2402 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 15:13:14.738253 kubelet[2402]: I0515 15:13:14.737933 2402 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-3982d56781","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 15:13:14.738469 
kubelet[2402]: I0515 15:13:14.738455 2402 topology_manager.go:138] "Creating topology manager with none policy" May 15 15:13:14.738530 kubelet[2402]: I0515 15:13:14.738522 2402 container_manager_linux.go:301] "Creating device plugin manager" May 15 15:13:14.738711 kubelet[2402]: I0515 15:13:14.738698 2402 state_mem.go:36] "Initialized new in-memory state store" May 15 15:13:14.739653 kubelet[2402]: I0515 15:13:14.739631 2402 kubelet.go:400] "Attempting to sync node with API server" May 15 15:13:14.739758 kubelet[2402]: I0515 15:13:14.739749 2402 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 15:13:14.739827 kubelet[2402]: I0515 15:13:14.739820 2402 kubelet.go:312] "Adding apiserver pod source" May 15 15:13:14.739879 kubelet[2402]: I0515 15:13:14.739873 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 15:13:14.743244 kubelet[2402]: W0515 15:13:14.743188 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.158.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3982d56781&limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.743352 kubelet[2402]: E0515 15:13:14.743342 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.158.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3982d56781&limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.743647 kubelet[2402]: W0515 15:13:14.743619 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.158.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.743972 kubelet[2402]: E0515 15:13:14.743956 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.158.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.744157 kubelet[2402]: I0515 15:13:14.744144 2402 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 15:13:14.745554 kubelet[2402]: I0515 15:13:14.745533 2402 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 15:13:14.745692 kubelet[2402]: W0515 15:13:14.745682 2402 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
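The HardEvictionThresholds list dumped by container_manager_linux.go above boils down to a set of LessThan comparisons against node signals (memory.available < 100Mi, nodefs.available < 10%, and so on). A minimal Go sketch of that comparison follows; the Threshold type and sample numbers are illustrative only, not kubelet's actual eviction-manager code.

```go
package main

import "fmt"

// Threshold mirrors one entry of the HardEvictionThresholds list printed above:
// a signal plus either an absolute quantity or a percentage of capacity,
// always compared with the LessThan operator.
type Threshold struct {
	Signal     string
	Quantity   int64   // absolute value in bytes/inodes; 0 when a percentage is used
	Percentage float64 // fraction of capacity; 0 when an absolute quantity is used
}

// crossed reports whether the observed value has fallen below the threshold,
// i.e. the point at which kubelet would begin hard eviction for that signal.
func crossed(t Threshold, observed, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return observed < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 * 1024 * 1024} // "100Mi" in the log
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}             // 10% in the log

	fmt.Println(crossed(memory, 64<<20, 2<<30))   // true: 64Mi free is below the 100Mi floor
	fmt.Println(crossed(nodefs, 20<<30, 100<<30)) // false: 20% of the filesystem is still free
}
```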
May 15 15:13:14.746601 kubelet[2402]: I0515 15:13:14.746583 2402 server.go:1264] "Started kubelet" May 15 15:13:14.747226 kubelet[2402]: I0515 15:13:14.747199 2402 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 15:13:14.748816 kubelet[2402]: I0515 15:13:14.748256 2402 server.go:455] "Adding debug handlers to kubelet server" May 15 15:13:14.752384 kubelet[2402]: I0515 15:13:14.752356 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 15:13:14.752485 kubelet[2402]: I0515 15:13:14.752355 2402 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 15:13:14.752689 kubelet[2402]: I0515 15:13:14.752676 2402 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 15:13:14.753032 kubelet[2402]: E0515 15:13:14.752907 2402 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.158.142:6443/api/v1/namespaces/default/events\": dial tcp 165.232.158.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-3982d56781.183fbc18b5187fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-3982d56781,UID:ci-4334.0.0-a-3982d56781,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-3982d56781,},FirstTimestamp:2025-05-15 15:13:14.746560438 +0000 UTC m=+0.406730962,LastTimestamp:2025-05-15 15:13:14.746560438 +0000 UTC m=+0.406730962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-3982d56781,}" May 15 15:13:14.755216 kubelet[2402]: I0515 15:13:14.755147 2402 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 15:13:14.758197 kubelet[2402]: I0515 15:13:14.757619 2402 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 15:13:14.758197 kubelet[2402]: I0515 15:13:14.757718 2402 reconciler.go:26] "Reconciler: start to sync state" May 15 15:13:14.759323 kubelet[2402]: E0515 15:13:14.759064 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.158.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3982d56781?timeout=10s\": dial tcp 165.232.158.142:6443: connect: connection refused" interval="200ms" May 15 15:13:14.759323 kubelet[2402]: W0515 15:13:14.759217 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.158.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.759323 kubelet[2402]: E0515 15:13:14.759291 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.158.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.763344 kubelet[2402]: I0515 15:13:14.760484 2402 factory.go:221] Registration of the systemd container factory successfully May 15 15:13:14.763344 kubelet[2402]: I0515 15:13:14.760577 2402 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file 
or directory May 15 15:13:14.772473 kubelet[2402]: I0515 15:13:14.772444 2402 factory.go:221] Registration of the containerd container factory successfully May 15 15:13:14.784386 kubelet[2402]: I0515 15:13:14.784332 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 15:13:14.786455 kubelet[2402]: I0515 15:13:14.786423 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 15:13:14.786455 kubelet[2402]: I0515 15:13:14.786458 2402 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 15:13:14.786610 kubelet[2402]: I0515 15:13:14.786480 2402 kubelet.go:2337] "Starting kubelet main sync loop" May 15 15:13:14.786610 kubelet[2402]: E0515 15:13:14.786526 2402 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 15:13:14.793615 kubelet[2402]: W0515 15:13:14.793561 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.158.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.793901 kubelet[2402]: E0515 15:13:14.793883 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.158.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:14.796762 kubelet[2402]: E0515 15:13:14.796720 2402 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 15:13:14.799345 kubelet[2402]: I0515 15:13:14.799326 2402 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 15:13:14.799469 kubelet[2402]: I0515 15:13:14.799459 2402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 15:13:14.799529 kubelet[2402]: I0515 15:13:14.799523 2402 state_mem.go:36] "Initialized new in-memory state store" May 15 15:13:14.800501 kubelet[2402]: I0515 15:13:14.800482 2402 policy_none.go:49] "None policy: Start" May 15 15:13:14.801442 kubelet[2402]: I0515 15:13:14.801427 2402 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 15:13:14.801527 kubelet[2402]: I0515 15:13:14.801520 2402 state_mem.go:35] "Initializing new in-memory state store" May 15 15:13:14.806870 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 15:13:14.818629 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 15:13:14.823922 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 15:13:14.830421 kubelet[2402]: I0515 15:13:14.830389 2402 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 15:13:14.831075 kubelet[2402]: I0515 15:13:14.831022 2402 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 15:13:14.831153 kubelet[2402]: I0515 15:13:14.831144 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 15:13:14.833837 kubelet[2402]: E0515 15:13:14.833807 2402 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-3982d56781\" not found" May 15 15:13:14.857572 kubelet[2402]: I0515 15:13:14.857511 2402 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:14.858005 kubelet[2402]: E0515 15:13:14.857976 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.158.142:6443/api/v1/nodes\": dial tcp 165.232.158.142:6443: connect: connection refused" node="ci-4334.0.0-a-3982d56781" May 15 15:13:14.887532 kubelet[2402]: I0515 15:13:14.887419 2402 topology_manager.go:215] "Topology Admit Handler" podUID="1b3fade00b91a1bcba7e3e18884a9024" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:14.888898 kubelet[2402]: I0515 15:13:14.888859 2402 topology_manager.go:215] "Topology Admit Handler" podUID="73f83460d803d640cb3d407a2e1cff6b" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:14.890084 kubelet[2402]: I0515 15:13:14.889620 2402 topology_manager.go:215] "Topology Admit Handler" podUID="3279190589c9766307c874a69b63040e" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:13:14.901432 systemd[1]: Created slice kubepods-burstable-pod73f83460d803d640cb3d407a2e1cff6b.slice - libcontainer container kubepods-burstable-pod73f83460d803d640cb3d407a2e1cff6b.slice. May 15 15:13:14.919040 systemd[1]: Created slice kubepods-burstable-pod1b3fade00b91a1bcba7e3e18884a9024.slice - libcontainer container kubepods-burstable-pod1b3fade00b91a1bcba7e3e18884a9024.slice. May 15 15:13:14.923643 systemd[1]: Created slice kubepods-burstable-pod3279190589c9766307c874a69b63040e.slice - libcontainer container kubepods-burstable-pod3279190589c9766307c874a69b63040e.slice. 
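The "Attempting to register node" / "connection refused" pairs above are the usual static-pod bootstrap ordering on a control-plane node: the kubelet keeps retrying node registration against 165.232.158.142:6443 while it is still starting the kube-apiserver static pod that will eventually answer there. A rough client-go sketch of that retry loop is below; the kubeconfig path is an assumption for illustration, and the real kubelet also treats AlreadyExists as success, which is omitted here.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration; the kubelet uses its bootstrap/rotated credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ci-4334.0.0-a-3982d56781"}}

	// Keep retrying until the kube-apiserver static pod answers on :6443;
	// until then every attempt fails with "connection refused", as in the log.
	for {
		if _, err := client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err == nil {
			fmt.Println("node registered")
			return
		} else {
			fmt.Println("registration attempt failed, retrying:", err)
		}
		time.Sleep(2 * time.Second)
	}
}
```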
May 15 15:13:14.959798 kubelet[2402]: E0515 15:13:14.959644 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.158.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3982d56781?timeout=10s\": dial tcp 165.232.158.142:6443: connect: connection refused" interval="400ms" May 15 15:13:15.059447 kubelet[2402]: I0515 15:13:15.059153 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:15.059447 kubelet[2402]: I0515 15:13:15.059221 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:15.059447 kubelet[2402]: I0515 15:13:15.059250 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:15.059447 kubelet[2402]: I0515 15:13:15.059274 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:15.059447 kubelet[2402]: I0515 15:13:15.059292 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: I0515 15:13:15.059308 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: I0515 15:13:15.059322 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: I0515 15:13:15.059340 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: I0515 15:13:15.059356 2402 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3279190589c9766307c874a69b63040e-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-3982d56781\" (UID: \"3279190589c9766307c874a69b63040e\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: I0515 15:13:15.059909 2402 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:15.060444 kubelet[2402]: E0515 15:13:15.060320 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.158.142:6443/api/v1/nodes\": dial tcp 165.232.158.142:6443: connect: connection refused" node="ci-4334.0.0-a-3982d56781" May 15 15:13:15.215424 kubelet[2402]: E0515 15:13:15.215293 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:15.216937 containerd[1566]: time="2025-05-15T15:13:15.216893097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-3982d56781,Uid:73f83460d803d640cb3d407a2e1cff6b,Namespace:kube-system,Attempt:0,}" May 15 15:13:15.219885 systemd-resolved[1398]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 15 15:13:15.222765 kubelet[2402]: E0515 15:13:15.222489 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:15.227544 kubelet[2402]: E0515 15:13:15.226847 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:15.227682 containerd[1566]: time="2025-05-15T15:13:15.227613174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-3982d56781,Uid:3279190589c9766307c874a69b63040e,Namespace:kube-system,Attempt:0,}" May 15 15:13:15.227852 containerd[1566]: time="2025-05-15T15:13:15.227822894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-3982d56781,Uid:1b3fade00b91a1bcba7e3e18884a9024,Namespace:kube-system,Attempt:0,}" May 15 15:13:15.360664 kubelet[2402]: E0515 15:13:15.360596 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.158.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3982d56781?timeout=10s\": dial tcp 165.232.158.142:6443: connect: connection refused" interval="800ms" May 15 15:13:15.461374 kubelet[2402]: I0515 15:13:15.461274 2402 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:15.462168 kubelet[2402]: E0515 15:13:15.461619 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.158.142:6443/api/v1/nodes\": dial tcp 165.232.158.142:6443: connect: connection refused" node="ci-4334.0.0-a-3982d56781" May 15 15:13:15.640491 kubelet[2402]: W0515 
15:13:15.639940 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.158.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:15.640491 kubelet[2402]: E0515 15:13:15.640024 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://165.232.158.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:15.648666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881320476.mount: Deactivated successfully. May 15 15:13:15.652968 containerd[1566]: time="2025-05-15T15:13:15.652923007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:13:15.653943 containerd[1566]: time="2025-05-15T15:13:15.653905709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:13:15.655866 containerd[1566]: time="2025-05-15T15:13:15.654919857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:13:15.655866 containerd[1566]: time="2025-05-15T15:13:15.655629305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 15:13:15.655866 containerd[1566]: time="2025-05-15T15:13:15.655763494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 15:13:15.656014 containerd[1566]: time="2025-05-15T15:13:15.655942275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 15:13:15.656374 containerd[1566]: time="2025-05-15T15:13:15.656349120Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:13:15.659751 containerd[1566]: time="2025-05-15T15:13:15.659702969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:13:15.660415 containerd[1566]: time="2025-05-15T15:13:15.660385663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 441.661367ms" May 15 15:13:15.662874 containerd[1566]: time="2025-05-15T15:13:15.662835781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 
433.659136ms" May 15 15:13:15.667334 containerd[1566]: time="2025-05-15T15:13:15.666936567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 437.048223ms" May 15 15:13:15.673221 kubelet[2402]: W0515 15:13:15.672682 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.158.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3982d56781&limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:15.673221 kubelet[2402]: E0515 15:13:15.672750 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://165.232.158.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3982d56781&limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:15.762870 containerd[1566]: time="2025-05-15T15:13:15.762818935Z" level=info msg="connecting to shim 8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5" address="unix:///run/containerd/s/9bb9170c845ed2f4ad950d6227cefd3f74e752ee6b1a5e760e10fcd153dc29af" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:15.768390 containerd[1566]: time="2025-05-15T15:13:15.768347523Z" level=info msg="connecting to shim 6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221" address="unix:///run/containerd/s/e06f38991dd2569335d12907f5c237225286fe1c8ddcf45052585e4affb6e706" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:15.769326 containerd[1566]: time="2025-05-15T15:13:15.769231920Z" level=info msg="connecting to shim 857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f" address="unix:///run/containerd/s/4693d96ad89a32c94562dc48442251d6806468786e86d2ec48fd40c5ee5db7eb" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:15.861638 systemd[1]: Started cri-containerd-6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221.scope - libcontainer container 6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221. May 15 15:13:15.865495 systemd[1]: Started cri-containerd-857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f.scope - libcontainer container 857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f. May 15 15:13:15.867033 systemd[1]: Started cri-containerd-8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5.scope - libcontainer container 8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5. 
May 15 15:13:15.955922 containerd[1566]: time="2025-05-15T15:13:15.955374260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-3982d56781,Uid:1b3fade00b91a1bcba7e3e18884a9024,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221\"" May 15 15:13:15.959082 kubelet[2402]: E0515 15:13:15.958345 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:15.966310 containerd[1566]: time="2025-05-15T15:13:15.966168071Z" level=info msg="CreateContainer within sandbox \"6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 15:13:15.984488 containerd[1566]: time="2025-05-15T15:13:15.983522585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-3982d56781,Uid:73f83460d803d640cb3d407a2e1cff6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5\"" May 15 15:13:15.987402 kubelet[2402]: E0515 15:13:15.987371 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:15.996273 containerd[1566]: time="2025-05-15T15:13:15.996223047Z" level=info msg="Container e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:15.996796 containerd[1566]: time="2025-05-15T15:13:15.996771988Z" level=info msg="CreateContainer within sandbox \"8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 15:13:15.999412 containerd[1566]: time="2025-05-15T15:13:15.999371209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-3982d56781,Uid:3279190589c9766307c874a69b63040e,Namespace:kube-system,Attempt:0,} returns sandbox id \"857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f\"" May 15 15:13:16.000711 kubelet[2402]: E0515 15:13:16.000635 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:16.005219 containerd[1566]: time="2025-05-15T15:13:16.004708803Z" level=info msg="CreateContainer within sandbox \"857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 15:13:16.008302 containerd[1566]: time="2025-05-15T15:13:16.008263485Z" level=info msg="CreateContainer within sandbox \"6a77b017f5a31a4c8f66bf57e8c009e04eb10a4bfe214358f1e1c73cd49bc221\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e\"" May 15 15:13:16.010149 containerd[1566]: time="2025-05-15T15:13:16.008928855Z" level=info msg="Container 9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:16.011508 containerd[1566]: time="2025-05-15T15:13:16.011469804Z" level=info msg="StartContainer for \"e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e\"" May 15 15:13:16.012616 
containerd[1566]: time="2025-05-15T15:13:16.012586505Z" level=info msg="Container ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:16.014972 containerd[1566]: time="2025-05-15T15:13:16.014922559Z" level=info msg="connecting to shim e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e" address="unix:///run/containerd/s/e06f38991dd2569335d12907f5c237225286fe1c8ddcf45052585e4affb6e706" protocol=ttrpc version=3 May 15 15:13:16.018578 containerd[1566]: time="2025-05-15T15:13:16.018537225Z" level=info msg="CreateContainer within sandbox \"857d627af0dd34dd2bbff925bd58435852b503b10ef75eb835c523829fdf6f9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22\"" May 15 15:13:16.019157 containerd[1566]: time="2025-05-15T15:13:16.019127750Z" level=info msg="StartContainer for \"ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22\"" May 15 15:13:16.020736 containerd[1566]: time="2025-05-15T15:13:16.020695533Z" level=info msg="connecting to shim ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22" address="unix:///run/containerd/s/4693d96ad89a32c94562dc48442251d6806468786e86d2ec48fd40c5ee5db7eb" protocol=ttrpc version=3 May 15 15:13:16.024676 containerd[1566]: time="2025-05-15T15:13:16.024540143Z" level=info msg="CreateContainer within sandbox \"8fe4133008ba93c9f647dd4424f1ab606f5517d2180fb0ece4ddecc7ebfda4d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36\"" May 15 15:13:16.027056 containerd[1566]: time="2025-05-15T15:13:16.027002891Z" level=info msg="StartContainer for \"9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36\"" May 15 15:13:16.028942 containerd[1566]: time="2025-05-15T15:13:16.028899417Z" level=info msg="connecting to shim 9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36" address="unix:///run/containerd/s/9bb9170c845ed2f4ad950d6227cefd3f74e752ee6b1a5e760e10fcd153dc29af" protocol=ttrpc version=3 May 15 15:13:16.051544 systemd[1]: Started cri-containerd-e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e.scope - libcontainer container e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e. May 15 15:13:16.055328 systemd[1]: Started cri-containerd-ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22.scope - libcontainer container ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22. May 15 15:13:16.070380 systemd[1]: Started cri-containerd-9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36.scope - libcontainer container 9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36. 
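The RunPodSandbox, CreateContainer, StartContainer, and "connecting to shim" messages above are the CRI runtime service being driven over containerd's socket for the three control-plane static pods. A compressed Go sketch of those three calls with the CRI v1 client is below, assuming the standard containerd endpoint; metadata is copied from the log, image/mount configuration and error handling are trimmed, so this is an outline of the call sequence rather than kubelet's real kuberuntime code.

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint on this host; error handling trimmed for brevity.
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-ci-4334.0.0-a-3982d56781",
			Uid:       "1b3fade00b91a1bcba7e3e18884a9024",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	// 1. RunPodSandbox: containerd answers with the sandbox id quoted in the
	//    "returns sandbox id" log lines above.
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// 2. CreateContainer inside that sandbox (image, mounts, env omitted here).
	cc, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"}},
		SandboxConfig: sandboxCfg,
	})

	// 3. StartContainer, reported above as "StartContainer ... returns successfully".
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
}
```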
May 15 15:13:16.101212 kubelet[2402]: W0515 15:13:16.101065 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.158.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:16.101449 kubelet[2402]: E0515 15:13:16.101167 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://165.232.158.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:16.129627 containerd[1566]: time="2025-05-15T15:13:16.129585514Z" level=info msg="StartContainer for \"e8e0f9ee26a47dd4c3d4a1adf52f5e2ecab0b4956cbc683cf8d5f219db45606e\" returns successfully" May 15 15:13:16.161974 kubelet[2402]: E0515 15:13:16.161701 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.158.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3982d56781?timeout=10s\": dial tcp 165.232.158.142:6443: connect: connection refused" interval="1.6s" May 15 15:13:16.162141 containerd[1566]: time="2025-05-15T15:13:16.161876980Z" level=info msg="StartContainer for \"9293ff7239276bb58e34dc0d64f8831979f35e2e86403f32a76e231105b90f36\" returns successfully" May 15 15:13:16.196835 containerd[1566]: time="2025-05-15T15:13:16.196761316Z" level=info msg="StartContainer for \"ad3c9cd64e9093007275f3daafc3557c6f837ac3bb1dcc8079d4cb905ed23d22\" returns successfully" May 15 15:13:16.264878 kubelet[2402]: I0515 15:13:16.264768 2402 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:16.265927 kubelet[2402]: E0515 15:13:16.265445 2402 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://165.232.158.142:6443/api/v1/nodes\": dial tcp 165.232.158.142:6443: connect: connection refused" node="ci-4334.0.0-a-3982d56781" May 15 15:13:16.276206 kubelet[2402]: W0515 15:13:16.276104 2402 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.158.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:16.276206 kubelet[2402]: E0515 15:13:16.276190 2402 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://165.232.158.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 165.232.158.142:6443: connect: connection refused May 15 15:13:16.812540 kubelet[2402]: E0515 15:13:16.811989 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:16.814024 kubelet[2402]: E0515 15:13:16.813995 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:16.818599 kubelet[2402]: E0515 15:13:16.818568 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:17.823254 kubelet[2402]: E0515 15:13:17.823212 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:17.867978 kubelet[2402]: I0515 15:13:17.867944 2402 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:18.091594 kubelet[2402]: E0515 15:13:18.091233 2402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-3982d56781\" not found" node="ci-4334.0.0-a-3982d56781" May 15 15:13:18.230710 kubelet[2402]: I0515 15:13:18.230637 2402 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:18.742264 kubelet[2402]: I0515 15:13:18.741832 2402 apiserver.go:52] "Watching apiserver" May 15 15:13:18.758706 kubelet[2402]: I0515 15:13:18.758658 2402 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 15:13:18.830913 kubelet[2402]: E0515 15:13:18.830860 2402 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-3982d56781\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:18.831450 kubelet[2402]: E0515 15:13:18.831373 2402 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:20.288392 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-7.scope)... May 15 15:13:20.288805 systemd[1]: Reloading... May 15 15:13:20.411213 zram_generator::config[2717]: No configuration found. May 15 15:13:20.548210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:13:20.711473 systemd[1]: Reloading finished in 421 ms. May 15 15:13:20.740525 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:13:20.755557 systemd[1]: kubelet.service: Deactivated successfully. May 15 15:13:20.755805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:20.755880 systemd[1]: kubelet.service: Consumed 820ms CPU time, 108.5M memory peak. May 15 15:13:20.758297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:13:20.909444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:13:20.924401 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 15:13:21.020797 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:13:21.020797 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 15:13:21.020797 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 15:13:21.020797 kubelet[2768]: I0515 15:13:21.020550 2768 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 15:13:21.034007 kubelet[2768]: I0515 15:13:21.033956 2768 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 15:13:21.034007 kubelet[2768]: I0515 15:13:21.033986 2768 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 15:13:21.034344 kubelet[2768]: I0515 15:13:21.034316 2768 server.go:927] "Client rotation is on, will bootstrap in background" May 15 15:13:21.036141 kubelet[2768]: I0515 15:13:21.036098 2768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 15:13:21.037903 kubelet[2768]: I0515 15:13:21.037863 2768 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 15:13:21.048133 kubelet[2768]: I0515 15:13:21.048089 2768 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 15:13:21.048474 kubelet[2768]: I0515 15:13:21.048426 2768 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 15:13:21.048689 kubelet[2768]: I0515 15:13:21.048467 2768 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-3982d56781","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 15:13:21.048862 kubelet[2768]: I0515 15:13:21.048707 2768 topology_manager.go:138] "Creating topology manager with none policy" May 15 15:13:21.048862 kubelet[2768]: I0515 15:13:21.048724 2768 container_manager_linux.go:301] "Creating device plugin manager" May 15 15:13:21.048862 kubelet[2768]: I0515 15:13:21.048781 2768 state_mem.go:36] "Initialized new in-memory state store" May 15 15:13:21.048990 kubelet[2768]: I0515 15:13:21.048932 2768 kubelet.go:400] "Attempting to sync node with API server" May 15 15:13:21.048990 kubelet[2768]: I0515 15:13:21.048947 2768 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" May 15 15:13:21.049254 kubelet[2768]: I0515 15:13:21.049226 2768 kubelet.go:312] "Adding apiserver pod source" May 15 15:13:21.049377 kubelet[2768]: I0515 15:13:21.049258 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 15:13:21.050871 kubelet[2768]: I0515 15:13:21.050844 2768 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 15:13:21.053194 kubelet[2768]: I0515 15:13:21.051068 2768 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 15:13:21.053194 kubelet[2768]: I0515 15:13:21.051757 2768 server.go:1264] "Started kubelet" May 15 15:13:21.055403 kubelet[2768]: I0515 15:13:21.054899 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 15:13:21.068354 kubelet[2768]: I0515 15:13:21.068302 2768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 15:13:21.069653 kubelet[2768]: I0515 15:13:21.069313 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 15:13:21.081844 kubelet[2768]: I0515 15:13:21.081804 2768 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 15:13:21.083724 kubelet[2768]: I0515 15:13:21.083685 2768 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 15:13:21.088320 kubelet[2768]: I0515 15:13:21.084115 2768 reconciler.go:26] "Reconciler: start to sync state" May 15 15:13:21.088320 kubelet[2768]: I0515 15:13:21.086349 2768 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 15:13:21.091155 kubelet[2768]: I0515 15:13:21.091126 2768 server.go:455] "Adding debug handlers to kubelet server" May 15 15:13:21.097822 kubelet[2768]: I0515 15:13:21.097769 2768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 15:13:21.100154 kubelet[2768]: E0515 15:13:21.100096 2768 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 15:13:21.102200 kubelet[2768]: I0515 15:13:21.101589 2768 factory.go:221] Registration of the containerd container factory successfully May 15 15:13:21.102200 kubelet[2768]: I0515 15:13:21.101617 2768 factory.go:221] Registration of the systemd container factory successfully May 15 15:13:21.111424 kubelet[2768]: I0515 15:13:21.111389 2768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 15:13:21.113123 kubelet[2768]: I0515 15:13:21.113089 2768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 15:13:21.113316 kubelet[2768]: I0515 15:13:21.113304 2768 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 15:13:21.113407 kubelet[2768]: I0515 15:13:21.113398 2768 kubelet.go:2337] "Starting kubelet main sync loop" May 15 15:13:21.113540 kubelet[2768]: E0515 15:13:21.113517 2768 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 15:13:21.171515 kubelet[2768]: I0515 15:13:21.171405 2768 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 15:13:21.171908 kubelet[2768]: I0515 15:13:21.171686 2768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 15:13:21.171908 kubelet[2768]: I0515 15:13:21.171720 2768 state_mem.go:36] "Initialized new in-memory state store" May 15 15:13:21.172627 kubelet[2768]: I0515 15:13:21.172608 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 15:13:21.172790 kubelet[2768]: I0515 15:13:21.172733 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 15:13:21.172903 kubelet[2768]: I0515 15:13:21.172893 2768 policy_none.go:49] "None policy: Start" May 15 15:13:21.174538 kubelet[2768]: I0515 15:13:21.174515 2768 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 15:13:21.174664 kubelet[2768]: I0515 15:13:21.174647 2768 state_mem.go:35] "Initializing new in-memory state store" May 15 15:13:21.175185 kubelet[2768]: I0515 15:13:21.175157 2768 state_mem.go:75] "Updated machine memory state" May 15 15:13:21.184780 kubelet[2768]: I0515 15:13:21.183988 2768 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:21.190521 kubelet[2768]: I0515 15:13:21.190435 2768 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 15:13:21.190651 kubelet[2768]: I0515 15:13:21.190611 2768 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 15:13:21.190747 kubelet[2768]: I0515 15:13:21.190733 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 15:13:21.197950 kubelet[2768]: I0515 15:13:21.197920 2768 kubelet_node_status.go:112] "Node was previously registered" node="ci-4334.0.0-a-3982d56781" May 15 15:13:21.199012 kubelet[2768]: I0515 15:13:21.198985 2768 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-3982d56781" May 15 15:13:21.214233 kubelet[2768]: I0515 15:13:21.214001 2768 topology_manager.go:215] "Topology Admit Handler" podUID="1b3fade00b91a1bcba7e3e18884a9024" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:21.216999 kubelet[2768]: I0515 15:13:21.216733 2768 topology_manager.go:215] "Topology Admit Handler" podUID="73f83460d803d640cb3d407a2e1cff6b" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.218329 kubelet[2768]: I0515 15:13:21.218159 2768 topology_manager.go:215] "Topology Admit Handler" podUID="3279190589c9766307c874a69b63040e" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:13:21.238596 kubelet[2768]: W0515 15:13:21.238390 2768 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:13:21.239011 kubelet[2768]: W0515 15:13:21.238982 2768 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:13:21.239281 kubelet[2768]: W0515 15:13:21.239264 2768 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:13:21.389740 kubelet[2768]: I0515 15:13:21.389428 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.389740 kubelet[2768]: I0515 15:13:21.389485 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.389740 kubelet[2768]: I0515 15:13:21.389511 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.389740 kubelet[2768]: I0515 15:13:21.389541 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3279190589c9766307c874a69b63040e-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-3982d56781\" (UID: \"3279190589c9766307c874a69b63040e\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:13:21.389740 kubelet[2768]: I0515 15:13:21.389564 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:21.390002 kubelet[2768]: I0515 15:13:21.389581 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:21.390002 kubelet[2768]: I0515 15:13:21.389597 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.390002 kubelet[2768]: I0515 15:13:21.389612 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/73f83460d803d640cb3d407a2e1cff6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-3982d56781\" (UID: \"73f83460d803d640cb3d407a2e1cff6b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:21.390002 kubelet[2768]: I0515 15:13:21.389629 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b3fade00b91a1bcba7e3e18884a9024-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3982d56781\" (UID: \"1b3fade00b91a1bcba7e3e18884a9024\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:21.542716 kubelet[2768]: E0515 15:13:21.541231 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:21.542716 kubelet[2768]: E0515 15:13:21.541353 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:21.542716 kubelet[2768]: E0515 15:13:21.541635 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:22.062260 kubelet[2768]: I0515 15:13:22.062026 2768 apiserver.go:52] "Watching apiserver" May 15 15:13:22.089398 kubelet[2768]: I0515 15:13:22.089356 2768 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 15:13:22.146676 kubelet[2768]: E0515 15:13:22.146638 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:22.148486 kubelet[2768]: E0515 15:13:22.148251 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:22.159101 kubelet[2768]: W0515 15:13:22.159073 2768 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:13:22.159784 kubelet[2768]: E0515 15:13:22.159246 2768 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-3982d56781\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:22.159784 kubelet[2768]: E0515 15:13:22.159625 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:22.282937 kubelet[2768]: I0515 15:13:22.282050 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" podStartSLOduration=1.282028628 podStartE2EDuration="1.282028628s" podCreationTimestamp="2025-05-15 15:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:13:22.264638878 +0000 UTC m=+1.333812674" watchObservedRunningTime="2025-05-15 15:13:22.282028628 +0000 UTC m=+1.351202423" May 15 15:13:22.314308 kubelet[2768]: I0515 15:13:22.313751 2768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" podStartSLOduration=1.313726337 podStartE2EDuration="1.313726337s" podCreationTimestamp="2025-05-15 15:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:13:22.283406013 +0000 UTC m=+1.352579800" watchObservedRunningTime="2025-05-15 15:13:22.313726337 +0000 UTC m=+1.382900135" May 15 15:13:22.349609 kubelet[2768]: I0515 15:13:22.349553 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" podStartSLOduration=1.349522372 podStartE2EDuration="1.349522372s" podCreationTimestamp="2025-05-15 15:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:13:22.315725506 +0000 UTC m=+1.384899302" watchObservedRunningTime="2025-05-15 15:13:22.349522372 +0000 UTC m=+1.418696150" May 15 15:13:23.147975 kubelet[2768]: E0515 15:13:23.147937 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:24.156791 kubelet[2768]: E0515 15:13:24.156744 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:24.266621 kubelet[2768]: E0515 15:13:24.266522 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:26.593871 sudo[1756]: pam_unix(sudo:session): session closed for user root May 15 15:13:26.597408 sshd[1755]: Connection closed by 139.178.68.195 port 35702 May 15 15:13:26.598274 sshd-session[1753]: pam_unix(sshd:session): session closed for user core May 15 15:13:26.605127 systemd[1]: sshd@6-165.232.158.142:22-139.178.68.195:35702.service: Deactivated successfully. May 15 15:13:26.608310 systemd[1]: session-7.scope: Deactivated successfully. May 15 15:13:26.608729 systemd[1]: session-7.scope: Consumed 4.971s CPU time, 187.3M memory peak. May 15 15:13:26.610677 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. May 15 15:13:26.612719 systemd-logind[1490]: Removed session 7. May 15 15:13:28.939218 systemd-timesyncd[1413]: Contacted time server 45.79.51.42:123 (2.flatcar.pool.ntp.org). May 15 15:13:28.939296 systemd-timesyncd[1413]: Initial clock synchronization to Thu 2025-05-15 15:13:29.309445 UTC. 
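In the pod_startup_latency_tracker entries a few lines above, the logged podStartSLOduration lines up with watchObservedRunningTime minus podCreationTimestamp for each static pod (the pulling timestamps are zero, so no image-pull time is subtracted). The kube-apiserver number can be reproduced with the standard library:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the kube-apiserver pod_startup_latency_tracker entry above.
	created, _ := time.Parse(time.RFC3339, "2025-05-15T15:13:21Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-05-15T15:13:22.282028628Z")
	fmt.Println(observed.Sub(created)) // 1.282028628s, the logged podStartSLOduration
}
```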
May 15 15:13:30.800736 kubelet[2768]: E0515 15:13:30.800612 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:31.171313 kubelet[2768]: E0515 15:13:31.170515 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:32.072913 kubelet[2768]: E0515 15:13:32.072823 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:32.173196 kubelet[2768]: E0515 15:13:32.173127 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:34.261351 kubelet[2768]: I0515 15:13:34.261240 2768 topology_manager.go:215] "Topology Admit Handler" podUID="af8460a0-071d-4aa9-83fa-36dd1c683543" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-csfpd" May 15 15:13:34.280224 kubelet[2768]: I0515 15:13:34.278528 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af8460a0-071d-4aa9-83fa-36dd1c683543-var-lib-calico\") pod \"tigera-operator-797db67f8-csfpd\" (UID: \"af8460a0-071d-4aa9-83fa-36dd1c683543\") " pod="tigera-operator/tigera-operator-797db67f8-csfpd" May 15 15:13:34.280224 kubelet[2768]: I0515 15:13:34.278572 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvw94\" (UniqueName: \"kubernetes.io/projected/af8460a0-071d-4aa9-83fa-36dd1c683543-kube-api-access-nvw94\") pod \"tigera-operator-797db67f8-csfpd\" (UID: \"af8460a0-071d-4aa9-83fa-36dd1c683543\") " pod="tigera-operator/tigera-operator-797db67f8-csfpd" May 15 15:13:34.279124 systemd[1]: Created slice kubepods-besteffort-podaf8460a0_071d_4aa9_83fa_36dd1c683543.slice - libcontainer container kubepods-besteffort-podaf8460a0_071d_4aa9_83fa_36dd1c683543.slice. May 15 15:13:34.298578 kubelet[2768]: I0515 15:13:34.298545 2768 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 15:13:34.299397 containerd[1566]: time="2025-05-15T15:13:34.299138151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 15:13:34.301402 kubelet[2768]: I0515 15:13:34.301365 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 15:13:34.303796 kubelet[2768]: E0515 15:13:34.303759 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:34.556329 kubelet[2768]: I0515 15:13:34.555451 2768 topology_manager.go:215] "Topology Admit Handler" podUID="e292b39c-7ed7-4e91-a119-847390cada75" podNamespace="kube-system" podName="kube-proxy-xq2kw" May 15 15:13:34.565269 systemd[1]: Created slice kubepods-besteffort-pode292b39c_7ed7_4e91_a119_847390cada75.slice - libcontainer container kubepods-besteffort-pode292b39c_7ed7_4e91_a119_847390cada75.slice. 
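
Between the tigera-operator admission and the kube-proxy admission above, the kubelet records the node's pod CIDR changing from empty to 192.168.0.0/24 and pushes it to the runtime, while containerd notes that no CNI config exists yet (expected until Calico drops one later in the log). A quick way to confirm that assignment from a workstation, assuming kubeconfig access and that the node object uses the hostname seen in this log, is a short kubernetes-client call:

    from kubernetes import client, config

    config.load_kube_config()                        # assumes a local kubeconfig for this cluster
    node = client.CoreV1Api().read_node("ci-4334.0.0-a-3982d56781")
    print(node.spec.pod_cidr)                        # expected to print 192.168.0.0/24 per the entries above
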
May 15 15:13:34.581317 kubelet[2768]: I0515 15:13:34.581151 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e292b39c-7ed7-4e91-a119-847390cada75-lib-modules\") pod \"kube-proxy-xq2kw\" (UID: \"e292b39c-7ed7-4e91-a119-847390cada75\") " pod="kube-system/kube-proxy-xq2kw" May 15 15:13:34.581317 kubelet[2768]: I0515 15:13:34.581245 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzc6\" (UniqueName: \"kubernetes.io/projected/e292b39c-7ed7-4e91-a119-847390cada75-kube-api-access-lxzc6\") pod \"kube-proxy-xq2kw\" (UID: \"e292b39c-7ed7-4e91-a119-847390cada75\") " pod="kube-system/kube-proxy-xq2kw" May 15 15:13:34.581317 kubelet[2768]: I0515 15:13:34.581291 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e292b39c-7ed7-4e91-a119-847390cada75-xtables-lock\") pod \"kube-proxy-xq2kw\" (UID: \"e292b39c-7ed7-4e91-a119-847390cada75\") " pod="kube-system/kube-proxy-xq2kw" May 15 15:13:34.581317 kubelet[2768]: I0515 15:13:34.581315 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e292b39c-7ed7-4e91-a119-847390cada75-kube-proxy\") pod \"kube-proxy-xq2kw\" (UID: \"e292b39c-7ed7-4e91-a119-847390cada75\") " pod="kube-system/kube-proxy-xq2kw" May 15 15:13:34.591274 containerd[1566]: time="2025-05-15T15:13:34.591209737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-csfpd,Uid:af8460a0-071d-4aa9-83fa-36dd1c683543,Namespace:tigera-operator,Attempt:0,}" May 15 15:13:34.615894 containerd[1566]: time="2025-05-15T15:13:34.615832697Z" level=info msg="connecting to shim f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5" address="unix:///run/containerd/s/56854889aa11294de10f31d6b18b6672d1c524e48d6c6d4f5e6e0b68aeab5838" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:34.655475 systemd[1]: Started cri-containerd-f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5.scope - libcontainer container f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5. 
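
Each RunPodSandbox call above returns a 64-character sandbox id, and systemd immediately starts a matching cri-containerd-<id>.scope unit; the shim socket under /run/containerd/s/ is then reused when containers are created inside that sandbox (the kube-proxy container a few entries below connects to the same address as its sandbox). A small sketch that pulls those ids out of journal text and names the scope unit to inspect, purely as text processing over lines shaped like the ones here:

    import re

    SANDBOX_RE = re.compile(r'returns sandbox id \\?"([0-9a-f]{64})')

    def sandbox_scopes(journal_text: str) -> dict[str, str]:
        # map sandbox id -> systemd scope unit, e.g. for `systemctl status <scope>`
        return {m.group(1): f"cri-containerd-{m.group(1)}.scope"
                for m in SANDBOX_RE.finditer(journal_text)}
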
May 15 15:13:34.724256 containerd[1566]: time="2025-05-15T15:13:34.724171396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-csfpd,Uid:af8460a0-071d-4aa9-83fa-36dd1c683543,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\"" May 15 15:13:34.730074 containerd[1566]: time="2025-05-15T15:13:34.729808531Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 15:13:34.870809 kubelet[2768]: E0515 15:13:34.870336 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:34.871391 containerd[1566]: time="2025-05-15T15:13:34.871323019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xq2kw,Uid:e292b39c-7ed7-4e91-a119-847390cada75,Namespace:kube-system,Attempt:0,}" May 15 15:13:34.890084 containerd[1566]: time="2025-05-15T15:13:34.889992832Z" level=info msg="connecting to shim 5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486" address="unix:///run/containerd/s/1e84c1478b4b5952f2bf1c6da30db5aaf106ef187612d27505a929492a572408" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:34.919426 systemd[1]: Started cri-containerd-5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486.scope - libcontainer container 5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486. May 15 15:13:34.952576 containerd[1566]: time="2025-05-15T15:13:34.952537949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xq2kw,Uid:e292b39c-7ed7-4e91-a119-847390cada75,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486\"" May 15 15:13:34.956064 kubelet[2768]: E0515 15:13:34.956006 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:34.959775 containerd[1566]: time="2025-05-15T15:13:34.959722470Z" level=info msg="CreateContainer within sandbox \"5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 15:13:34.974004 containerd[1566]: time="2025-05-15T15:13:34.973955862Z" level=info msg="Container 16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:34.980101 containerd[1566]: time="2025-05-15T15:13:34.980049120Z" level=info msg="CreateContainer within sandbox \"5e486ff388b5ee53883ff1aef0715ea4e1d3ba5690043b7f96d8a7751a337486\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4\"" May 15 15:13:34.980964 containerd[1566]: time="2025-05-15T15:13:34.980908037Z" level=info msg="StartContainer for \"16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4\"" May 15 15:13:34.982486 containerd[1566]: time="2025-05-15T15:13:34.982419417Z" level=info msg="connecting to shim 16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4" address="unix:///run/containerd/s/1e84c1478b4b5952f2bf1c6da30db5aaf106ef187612d27505a929492a572408" protocol=ttrpc version=3 May 15 15:13:35.005411 systemd[1]: Started cri-containerd-16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4.scope - libcontainer container 
16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4. May 15 15:13:35.053937 containerd[1566]: time="2025-05-15T15:13:35.053880521Z" level=info msg="StartContainer for \"16ef5c89d6faa0bfc2dc1190e9be1e021c33fa178e5d2ad6ab624e62d17910d4\" returns successfully" May 15 15:13:35.183926 kubelet[2768]: E0515 15:13:35.183342 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:36.248134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405325578.mount: Deactivated successfully. May 15 15:13:36.429491 update_engine[1494]: I20250515 15:13:36.429414 1494 update_attempter.cc:509] Updating boot flags... May 15 15:13:37.099014 containerd[1566]: time="2025-05-15T15:13:37.098854499Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:37.100079 containerd[1566]: time="2025-05-15T15:13:37.099901707Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 15:13:37.100666 containerd[1566]: time="2025-05-15T15:13:37.100632232Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:37.102515 containerd[1566]: time="2025-05-15T15:13:37.102485475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:37.103459 containerd[1566]: time="2025-05-15T15:13:37.103327461Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.373478438s" May 15 15:13:37.103459 containerd[1566]: time="2025-05-15T15:13:37.103369637Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:13:37.105862 containerd[1566]: time="2025-05-15T15:13:37.105827786Z" level=info msg="CreateContainer within sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 15:13:37.114151 containerd[1566]: time="2025-05-15T15:13:37.112378713Z" level=info msg="Container 4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:37.125323 containerd[1566]: time="2025-05-15T15:13:37.125157005Z" level=info msg="CreateContainer within sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\"" May 15 15:13:37.127374 containerd[1566]: time="2025-05-15T15:13:37.127321387Z" level=info msg="StartContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\"" May 15 15:13:37.128433 containerd[1566]: time="2025-05-15T15:13:37.128392081Z" level=info msg="connecting to shim 
4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a" address="unix:///run/containerd/s/56854889aa11294de10f31d6b18b6672d1c524e48d6c6d4f5e6e0b68aeab5838" protocol=ttrpc version=3 May 15 15:13:37.156481 systemd[1]: Started cri-containerd-4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a.scope - libcontainer container 4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a. May 15 15:13:37.200211 containerd[1566]: time="2025-05-15T15:13:37.199923331Z" level=info msg="StartContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" returns successfully" May 15 15:13:38.207270 kubelet[2768]: I0515 15:13:38.207044 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xq2kw" podStartSLOduration=4.207021581 podStartE2EDuration="4.207021581s" podCreationTimestamp="2025-05-15 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:13:35.195771651 +0000 UTC m=+14.264945449" watchObservedRunningTime="2025-05-15 15:13:38.207021581 +0000 UTC m=+17.276195417" May 15 15:13:40.503204 kubelet[2768]: I0515 15:13:40.503115 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-csfpd" podStartSLOduration=4.126717761 podStartE2EDuration="6.503092108s" podCreationTimestamp="2025-05-15 15:13:34 +0000 UTC" firstStartedPulling="2025-05-15 15:13:34.728055162 +0000 UTC m=+13.797228937" lastFinishedPulling="2025-05-15 15:13:37.104429507 +0000 UTC m=+16.173603284" observedRunningTime="2025-05-15 15:13:38.207632838 +0000 UTC m=+17.276806638" watchObservedRunningTime="2025-05-15 15:13:40.503092108 +0000 UTC m=+19.572265884" May 15 15:13:40.504161 kubelet[2768]: I0515 15:13:40.504119 2768 topology_manager.go:215] "Topology Admit Handler" podUID="4f0bffdd-7012-4221-a31e-e913bcd4e0ff" podNamespace="calico-system" podName="calico-typha-64b5f48db9-jvlhw" May 15 15:13:40.514081 systemd[1]: Created slice kubepods-besteffort-pod4f0bffdd_7012_4221_a31e_e913bcd4e0ff.slice - libcontainer container kubepods-besteffort-pod4f0bffdd_7012_4221_a31e_e913bcd4e0ff.slice. 
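
The pod_startup_latency_tracker entries above are internally consistent: the SLO figure is the end-to-end startup duration minus the time spent pulling the image, and kube-proxy's SLO equals its E2E duration because nothing was pulled (both pull timestamps are the zero value). For tigera-operator-797db67f8-csfpd the arithmetic works out as below; the numbers are copied from the log line, only the subtraction is added here.

    e2e  = 6.503092108                          # podStartE2EDuration from the entry above
    pull = 37.104429507 - 34.728055162          # lastFinishedPulling - firstStartedPulling (seconds past 15:13)
    slo  = e2e - pull
    print(f"{slo:.9f}")                         # ~4.126717763, agreeing with podStartSLOduration=4.126717761
                                                # to within a few nanoseconds (the logged timestamps are truncated)
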
May 15 15:13:40.520034 kubelet[2768]: I0515 15:13:40.519997 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f0bffdd-7012-4221-a31e-e913bcd4e0ff-typha-certs\") pod \"calico-typha-64b5f48db9-jvlhw\" (UID: \"4f0bffdd-7012-4221-a31e-e913bcd4e0ff\") " pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:13:40.520034 kubelet[2768]: I0515 15:13:40.520039 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f0bffdd-7012-4221-a31e-e913bcd4e0ff-tigera-ca-bundle\") pod \"calico-typha-64b5f48db9-jvlhw\" (UID: \"4f0bffdd-7012-4221-a31e-e913bcd4e0ff\") " pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:13:40.520216 kubelet[2768]: I0515 15:13:40.520059 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgls\" (UniqueName: \"kubernetes.io/projected/4f0bffdd-7012-4221-a31e-e913bcd4e0ff-kube-api-access-gtgls\") pod \"calico-typha-64b5f48db9-jvlhw\" (UID: \"4f0bffdd-7012-4221-a31e-e913bcd4e0ff\") " pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:13:40.643036 kubelet[2768]: I0515 15:13:40.641344 2768 topology_manager.go:215] "Topology Admit Handler" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" podNamespace="calico-system" podName="calico-node-56p29" May 15 15:13:40.676729 systemd[1]: Created slice kubepods-besteffort-podea3f9278_e4ee_4dca_80e1_48db54fe37e5.slice - libcontainer container kubepods-besteffort-podea3f9278_e4ee_4dca_80e1_48db54fe37e5.slice. May 15 15:13:40.721226 kubelet[2768]: I0515 15:13:40.721127 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-cni-log-dir\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722026 kubelet[2768]: I0515 15:13:40.721982 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-lib-modules\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722203 kubelet[2768]: I0515 15:13:40.722134 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-cni-bin-dir\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722360 kubelet[2768]: I0515 15:13:40.722157 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-var-run-calico\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722360 kubelet[2768]: I0515 15:13:40.722329 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-node-certs\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " 
pod="calico-system/calico-node-56p29" May 15 15:13:40.722560 kubelet[2768]: I0515 15:13:40.722346 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxm84\" (UniqueName: \"kubernetes.io/projected/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-kube-api-access-mxm84\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722836 kubelet[2768]: I0515 15:13:40.722786 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-xtables-lock\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.722836 kubelet[2768]: I0515 15:13:40.722810 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-cni-net-dir\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.723268 kubelet[2768]: I0515 15:13:40.722827 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-flexvol-driver-host\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.723403 kubelet[2768]: I0515 15:13:40.723390 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-policysync\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.723502 kubelet[2768]: I0515 15:13:40.723492 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-tigera-ca-bundle\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.723647 kubelet[2768]: I0515 15:13:40.723563 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea3f9278-e4ee-4dca-80e1-48db54fe37e5-var-lib-calico\") pod \"calico-node-56p29\" (UID: \"ea3f9278-e4ee-4dca-80e1-48db54fe37e5\") " pod="calico-system/calico-node-56p29" May 15 15:13:40.751199 kubelet[2768]: I0515 15:13:40.750874 2768 topology_manager.go:215] "Topology Admit Handler" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" podNamespace="calico-system" podName="csi-node-driver-ssx6b" May 15 15:13:40.753649 kubelet[2768]: E0515 15:13:40.752637 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:40.817891 kubelet[2768]: E0515 15:13:40.817519 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:40.818156 containerd[1566]: time="2025-05-15T15:13:40.818116316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b5f48db9-jvlhw,Uid:4f0bffdd-7012-4221-a31e-e913bcd4e0ff,Namespace:calico-system,Attempt:0,}" May 15 15:13:40.824956 kubelet[2768]: I0515 15:13:40.824862 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7521021f-77bb-4466-96bd-6730a9b2c004-socket-dir\") pod \"csi-node-driver-ssx6b\" (UID: \"7521021f-77bb-4466-96bd-6730a9b2c004\") " pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:40.824956 kubelet[2768]: I0515 15:13:40.824900 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7521021f-77bb-4466-96bd-6730a9b2c004-registration-dir\") pod \"csi-node-driver-ssx6b\" (UID: \"7521021f-77bb-4466-96bd-6730a9b2c004\") " pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:40.825600 kubelet[2768]: I0515 15:13:40.825441 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7521021f-77bb-4466-96bd-6730a9b2c004-kubelet-dir\") pod \"csi-node-driver-ssx6b\" (UID: \"7521021f-77bb-4466-96bd-6730a9b2c004\") " pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:40.826029 kubelet[2768]: I0515 15:13:40.825817 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7521021f-77bb-4466-96bd-6730a9b2c004-varrun\") pod \"csi-node-driver-ssx6b\" (UID: \"7521021f-77bb-4466-96bd-6730a9b2c004\") " pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:40.826029 kubelet[2768]: I0515 15:13:40.825852 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dvr\" (UniqueName: \"kubernetes.io/projected/7521021f-77bb-4466-96bd-6730a9b2c004-kube-api-access-h2dvr\") pod \"csi-node-driver-ssx6b\" (UID: \"7521021f-77bb-4466-96bd-6730a9b2c004\") " pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:40.863746 containerd[1566]: time="2025-05-15T15:13:40.863623825Z" level=info msg="connecting to shim 48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853" address="unix:///run/containerd/s/a67c59d28036aff2a22a1eda0de342bd6567df1a3a5b29fb857c107fd849d67b" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:40.866293 kubelet[2768]: E0515 15:13:40.866128 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.866293 kubelet[2768]: W0515 15:13:40.866159 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.866293 kubelet[2768]: E0515 15:13:40.866219 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.874498 kubelet[2768]: E0515 15:13:40.874455 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.874758 kubelet[2768]: W0515 15:13:40.874718 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.875212 kubelet[2768]: E0515 15:13:40.874984 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.902395 systemd[1]: Started cri-containerd-48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853.scope - libcontainer container 48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853. May 15 15:13:40.928314 kubelet[2768]: E0515 15:13:40.928282 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.928314 kubelet[2768]: W0515 15:13:40.928305 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.928314 kubelet[2768]: E0515 15:13:40.928329 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.928753 kubelet[2768]: E0515 15:13:40.928500 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.928753 kubelet[2768]: W0515 15:13:40.928507 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.928753 kubelet[2768]: E0515 15:13:40.928522 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.928753 kubelet[2768]: E0515 15:13:40.928671 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.928753 kubelet[2768]: W0515 15:13:40.928677 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.928753 kubelet[2768]: E0515 15:13:40.928691 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.928822 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.929684 kubelet[2768]: W0515 15:13:40.928828 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.928840 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.928965 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.929684 kubelet[2768]: W0515 15:13:40.928971 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.928979 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.929130 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.929684 kubelet[2768]: W0515 15:13:40.929137 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.929149 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.929684 kubelet[2768]: E0515 15:13:40.929296 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.930737 kubelet[2768]: W0515 15:13:40.929303 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.930737 kubelet[2768]: E0515 15:13:40.929316 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.930737 kubelet[2768]: E0515 15:13:40.929451 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.930737 kubelet[2768]: W0515 15:13:40.929460 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.930737 kubelet[2768]: E0515 15:13:40.929470 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.931696 kubelet[2768]: E0515 15:13:40.931666 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.932044 kubelet[2768]: W0515 15:13:40.931830 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.932777 kubelet[2768]: E0515 15:13:40.932715 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.933733 kubelet[2768]: E0515 15:13:40.933359 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.933733 kubelet[2768]: W0515 15:13:40.933379 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.933948 kubelet[2768]: E0515 15:13:40.933917 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.934452 kubelet[2768]: E0515 15:13:40.934328 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.934452 kubelet[2768]: W0515 15:13:40.934341 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.934452 kubelet[2768]: E0515 15:13:40.934378 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.935475 kubelet[2768]: E0515 15:13:40.935408 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.935898 kubelet[2768]: W0515 15:13:40.935820 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.935898 kubelet[2768]: E0515 15:13:40.935888 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.936401 kubelet[2768]: E0515 15:13:40.936326 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.936752 kubelet[2768]: W0515 15:13:40.936578 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.936752 kubelet[2768]: E0515 15:13:40.936615 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.937442 kubelet[2768]: E0515 15:13:40.937381 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.937442 kubelet[2768]: W0515 15:13:40.937396 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.937442 kubelet[2768]: E0515 15:13:40.937422 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.938194 kubelet[2768]: E0515 15:13:40.937844 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.938194 kubelet[2768]: W0515 15:13:40.937857 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.938194 kubelet[2768]: E0515 15:13:40.937881 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.938975 kubelet[2768]: E0515 15:13:40.938883 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.938975 kubelet[2768]: W0515 15:13:40.938897 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.938975 kubelet[2768]: E0515 15:13:40.938923 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.939762 kubelet[2768]: E0515 15:13:40.939692 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.939762 kubelet[2768]: W0515 15:13:40.939707 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.939762 kubelet[2768]: E0515 15:13:40.939732 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.940487 kubelet[2768]: E0515 15:13:40.940397 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.940487 kubelet[2768]: W0515 15:13:40.940411 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.940926 kubelet[2768]: E0515 15:13:40.940899 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.941264 kubelet[2768]: E0515 15:13:40.941221 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.941264 kubelet[2768]: W0515 15:13:40.941234 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.941964 kubelet[2768]: E0515 15:13:40.941656 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.942344 kubelet[2768]: E0515 15:13:40.942172 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.942344 kubelet[2768]: W0515 15:13:40.942201 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.942344 kubelet[2768]: E0515 15:13:40.942224 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.943075 kubelet[2768]: E0515 15:13:40.943012 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.943075 kubelet[2768]: W0515 15:13:40.943027 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.943075 kubelet[2768]: E0515 15:13:40.943050 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.943576 kubelet[2768]: E0515 15:13:40.943503 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.944242 kubelet[2768]: W0515 15:13:40.944222 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.944357 kubelet[2768]: E0515 15:13:40.944341 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.944702 kubelet[2768]: E0515 15:13:40.944597 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.944702 kubelet[2768]: W0515 15:13:40.944610 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.944779 kubelet[2768]: E0515 15:13:40.944746 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.945046 kubelet[2768]: E0515 15:13:40.945001 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.945241 kubelet[2768]: W0515 15:13:40.945101 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.945241 kubelet[2768]: E0515 15:13:40.945128 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.946000 kubelet[2768]: E0515 15:13:40.945945 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.946000 kubelet[2768]: W0515 15:13:40.945958 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.946000 kubelet[2768]: E0515 15:13:40.945971 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:40.964758 kubelet[2768]: E0515 15:13:40.964613 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:40.964758 kubelet[2768]: W0515 15:13:40.964636 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:40.964758 kubelet[2768]: E0515 15:13:40.964657 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:40.972227 containerd[1566]: time="2025-05-15T15:13:40.971602770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b5f48db9-jvlhw,Uid:4f0bffdd-7012-4221-a31e-e913bcd4e0ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853\"" May 15 15:13:40.974117 kubelet[2768]: E0515 15:13:40.974076 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:40.976082 containerd[1566]: time="2025-05-15T15:13:40.975899970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 15:13:40.981732 kubelet[2768]: E0515 15:13:40.981680 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:40.982587 containerd[1566]: time="2025-05-15T15:13:40.982477912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56p29,Uid:ea3f9278-e4ee-4dca-80e1-48db54fe37e5,Namespace:calico-system,Attempt:0,}" May 15 15:13:41.013695 containerd[1566]: time="2025-05-15T15:13:41.013373849Z" level=info msg="connecting to shim b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1" address="unix:///run/containerd/s/6fb3aa088f0edaf21bd87e29aee2457469ddeb6e744a09ed5e9b09221fe5d21e" namespace=k8s.io protocol=ttrpc version=3 May 15 15:13:41.056445 systemd[1]: Started cri-containerd-b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1.scope - libcontainer container b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1. 
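
The bursts of driver-call.go and plugins.go errors around each volume reconcile all come from the same probe: the kubelet looks for a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is not there yet, and the empty output cannot be parsed as the driver's JSON status ("unexpected end of JSON input"). This is typically harmless noise until something installs the driver; the calico-node pod admitted above mounts a volume named flexvol-driver-host, which is presumably how that directory gets populated. A minimal stand-alone probe of the same path, only to illustrate the two failure branches seen here (it does not reproduce the kubelet's plugin machinery):

    import json, os, subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe(driver: str = DRIVER) -> dict:
        if not os.access(driver, os.X_OK):
            # the "executable file not found" branch in the log
            return {"status": "Failure", "message": f"{driver} is missing or not executable"}
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
        try:
            return json.loads(out)   # a working driver answers `init` with a JSON status object
        except json.JSONDecodeError:
            # empty output is what the kubelet reports as "unexpected end of JSON input"
            return {"status": "Failure", "message": "driver produced no JSON output"}

    print(probe())
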
May 15 15:13:41.111447 containerd[1566]: time="2025-05-15T15:13:41.111383274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-56p29,Uid:ea3f9278-e4ee-4dca-80e1-48db54fe37e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\"" May 15 15:13:41.113340 kubelet[2768]: E0515 15:13:41.113306 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:42.114515 kubelet[2768]: E0515 15:13:42.114349 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:42.970809 containerd[1566]: time="2025-05-15T15:13:42.970723259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:42.971880 containerd[1566]: time="2025-05-15T15:13:42.971833655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 15:13:42.972762 containerd[1566]: time="2025-05-15T15:13:42.972683636Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:42.975135 containerd[1566]: time="2025-05-15T15:13:42.974683835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:42.975811 containerd[1566]: time="2025-05-15T15:13:42.975773289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.9998298s" May 15 15:13:42.975955 containerd[1566]: time="2025-05-15T15:13:42.975935106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 15:13:42.977225 containerd[1566]: time="2025-05-15T15:13:42.977161682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 15:13:42.999169 containerd[1566]: time="2025-05-15T15:13:42.999131733Z" level=info msg="CreateContainer within sandbox \"48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 15:13:43.010946 containerd[1566]: time="2025-05-15T15:13:43.007763727Z" level=info msg="Container 8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:43.015374 containerd[1566]: time="2025-05-15T15:13:43.015306927Z" level=info msg="CreateContainer within sandbox \"48269e473060af608a041f4fa834d42aa366c540101a20dc7da1759f36286853\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70\"" May 15 15:13:43.016391 containerd[1566]: time="2025-05-15T15:13:43.016362497Z" level=info msg="StartContainer for \"8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70\"" May 15 15:13:43.019049 containerd[1566]: time="2025-05-15T15:13:43.018972480Z" level=info msg="connecting to shim 8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70" address="unix:///run/containerd/s/a67c59d28036aff2a22a1eda0de342bd6567df1a3a5b29fb857c107fd849d67b" protocol=ttrpc version=3 May 15 15:13:43.048391 systemd[1]: Started cri-containerd-8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70.scope - libcontainer container 8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70. May 15 15:13:43.105003 containerd[1566]: time="2025-05-15T15:13:43.104872073Z" level=info msg="StartContainer for \"8da904aec2d51f6459bbc4f2e823ec5dd3b16f9e8be5026f0560940009e47f70\" returns successfully" May 15 15:13:43.225550 kubelet[2768]: E0515 15:13:43.224844 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:43.230403 kubelet[2768]: E0515 15:13:43.230372 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.230403 kubelet[2768]: W0515 15:13:43.230397 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.230403 kubelet[2768]: E0515 15:13:43.230418 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.230744 kubelet[2768]: E0515 15:13:43.230668 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.230744 kubelet[2768]: W0515 15:13:43.230681 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.230744 kubelet[2768]: E0515 15:13:43.230693 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.230877 kubelet[2768]: E0515 15:13:43.230853 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.230877 kubelet[2768]: W0515 15:13:43.230864 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.230877 kubelet[2768]: E0515 15:13:43.230872 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231023 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.231518 kubelet[2768]: W0515 15:13:43.231030 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231038 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231242 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.231518 kubelet[2768]: W0515 15:13:43.231249 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231258 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231467 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.231518 kubelet[2768]: W0515 15:13:43.231475 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.231518 kubelet[2768]: E0515 15:13:43.231484 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.232140 kubelet[2768]: E0515 15:13:43.231627 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.232140 kubelet[2768]: W0515 15:13:43.231636 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.232140 kubelet[2768]: E0515 15:13:43.231648 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.232140 kubelet[2768]: E0515 15:13:43.231976 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.232140 kubelet[2768]: W0515 15:13:43.231986 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.232140 kubelet[2768]: E0515 15:13:43.231996 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.233313 kubelet[2768]: E0515 15:13:43.232155 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.233313 kubelet[2768]: W0515 15:13:43.232161 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.233313 kubelet[2768]: E0515 15:13:43.232168 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.233313 kubelet[2768]: E0515 15:13:43.232940 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.233313 kubelet[2768]: W0515 15:13:43.232951 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.233313 kubelet[2768]: E0515 15:13:43.232962 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.233313 kubelet[2768]: E0515 15:13:43.233132 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.234246 kubelet[2768]: W0515 15:13:43.233140 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.234328 kubelet[2768]: E0515 15:13:43.234254 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.234466 kubelet[2768]: E0515 15:13:43.234455 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.234466 kubelet[2768]: W0515 15:13:43.234464 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.234544 kubelet[2768]: E0515 15:13:43.234475 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.234675 kubelet[2768]: E0515 15:13:43.234661 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.234723 kubelet[2768]: W0515 15:13:43.234682 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.234723 kubelet[2768]: E0515 15:13:43.234695 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.234937 kubelet[2768]: E0515 15:13:43.234863 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.234937 kubelet[2768]: W0515 15:13:43.234875 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.234937 kubelet[2768]: E0515 15:13:43.234885 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.235080 kubelet[2768]: E0515 15:13:43.235067 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.235080 kubelet[2768]: W0515 15:13:43.235079 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.235133 kubelet[2768]: E0515 15:13:43.235089 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.249458 kubelet[2768]: E0515 15:13:43.249423 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.249458 kubelet[2768]: W0515 15:13:43.249449 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.249712 kubelet[2768]: E0515 15:13:43.249472 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.250543 kubelet[2768]: E0515 15:13:43.250497 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.250543 kubelet[2768]: W0515 15:13:43.250512 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.250543 kubelet[2768]: E0515 15:13:43.250537 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.250755 kubelet[2768]: E0515 15:13:43.250743 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.250755 kubelet[2768]: W0515 15:13:43.250753 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.250923 kubelet[2768]: E0515 15:13:43.250766 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.251147 kubelet[2768]: E0515 15:13:43.251129 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.251325 kubelet[2768]: W0515 15:13:43.251241 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.251325 kubelet[2768]: E0515 15:13:43.251266 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.252582 kubelet[2768]: E0515 15:13:43.252447 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.252582 kubelet[2768]: W0515 15:13:43.252466 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.252582 kubelet[2768]: E0515 15:13:43.252485 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.252884 kubelet[2768]: E0515 15:13:43.252784 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.252884 kubelet[2768]: W0515 15:13:43.252799 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.252884 kubelet[2768]: E0515 15:13:43.252826 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.253136 kubelet[2768]: E0515 15:13:43.253045 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.253136 kubelet[2768]: W0515 15:13:43.253055 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.253136 kubelet[2768]: E0515 15:13:43.253076 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.253783 kubelet[2768]: E0515 15:13:43.253299 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.253783 kubelet[2768]: W0515 15:13:43.253668 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.253783 kubelet[2768]: E0515 15:13:43.253706 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.254004 kubelet[2768]: E0515 15:13:43.253993 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.254071 kubelet[2768]: W0515 15:13:43.254062 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.254226 kubelet[2768]: E0515 15:13:43.254154 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.255425 kubelet[2768]: E0515 15:13:43.255390 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.255425 kubelet[2768]: W0515 15:13:43.255405 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.255425 kubelet[2768]: E0515 15:13:43.255422 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.255923 kubelet[2768]: E0515 15:13:43.255556 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.255923 kubelet[2768]: W0515 15:13:43.255565 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.255923 kubelet[2768]: E0515 15:13:43.255574 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.255923 kubelet[2768]: E0515 15:13:43.255727 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.255923 kubelet[2768]: W0515 15:13:43.255734 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.256304 kubelet[2768]: E0515 15:13:43.255741 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.257059 kubelet[2768]: E0515 15:13:43.256521 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.257059 kubelet[2768]: W0515 15:13:43.256535 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.257059 kubelet[2768]: E0515 15:13:43.256546 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:43.257325 kubelet[2768]: E0515 15:13:43.257311 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.257388 kubelet[2768]: W0515 15:13:43.257379 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.257463 kubelet[2768]: E0515 15:13:43.257446 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.260200 kubelet[2768]: E0515 15:13:43.259978 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.260200 kubelet[2768]: W0515 15:13:43.259994 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.260396 kubelet[2768]: E0515 15:13:43.260359 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.260626 kubelet[2768]: W0515 15:13:43.260449 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.260626 kubelet[2768]: E0515 15:13:43.260468 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.260626 kubelet[2768]: E0515 15:13:43.260395 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.261196 kubelet[2768]: E0515 15:13:43.261033 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.261196 kubelet[2768]: W0515 15:13:43.261047 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.261196 kubelet[2768]: E0515 15:13:43.261065 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:43.263498 kubelet[2768]: E0515 15:13:43.263483 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:43.263614 kubelet[2768]: W0515 15:13:43.263601 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:43.263667 kubelet[2768]: E0515 15:13:43.263659 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.114817 kubelet[2768]: E0515 15:13:44.114674 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:44.224406 kubelet[2768]: I0515 15:13:44.224351 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 15:13:44.225217 kubelet[2768]: E0515 15:13:44.225191 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:44.242662 kubelet[2768]: E0515 15:13:44.242521 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.242662 kubelet[2768]: W0515 15:13:44.242549 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.242662 kubelet[2768]: E0515 15:13:44.242577 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.242944 kubelet[2768]: E0515 15:13:44.242749 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.242944 kubelet[2768]: W0515 15:13:44.242756 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.242944 kubelet[2768]: E0515 15:13:44.242766 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.242944 kubelet[2768]: E0515 15:13:44.242922 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.242944 kubelet[2768]: W0515 15:13:44.242929 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.242944 kubelet[2768]: E0515 15:13:44.242938 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.243098 kubelet[2768]: E0515 15:13:44.243064 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243098 kubelet[2768]: W0515 15:13:44.243071 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243098 kubelet[2768]: E0515 15:13:44.243078 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.243281 kubelet[2768]: E0515 15:13:44.243257 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243281 kubelet[2768]: W0515 15:13:44.243265 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243281 kubelet[2768]: E0515 15:13:44.243273 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.243410 kubelet[2768]: E0515 15:13:44.243398 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243410 kubelet[2768]: W0515 15:13:44.243406 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243477 kubelet[2768]: E0515 15:13:44.243416 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.243556 kubelet[2768]: E0515 15:13:44.243545 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243556 kubelet[2768]: W0515 15:13:44.243554 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243615 kubelet[2768]: E0515 15:13:44.243562 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.243701 kubelet[2768]: E0515 15:13:44.243690 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243701 kubelet[2768]: W0515 15:13:44.243698 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243766 kubelet[2768]: E0515 15:13:44.243705 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.243843 kubelet[2768]: E0515 15:13:44.243834 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.243873 kubelet[2768]: W0515 15:13:44.243842 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.243873 kubelet[2768]: E0515 15:13:44.243855 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.244025 kubelet[2768]: E0515 15:13:44.244014 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244025 kubelet[2768]: W0515 15:13:44.244023 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244110 kubelet[2768]: E0515 15:13:44.244031 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.244150 kubelet[2768]: E0515 15:13:44.244141 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244150 kubelet[2768]: W0515 15:13:44.244150 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244235 kubelet[2768]: E0515 15:13:44.244156 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.244307 kubelet[2768]: E0515 15:13:44.244298 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244307 kubelet[2768]: W0515 15:13:44.244306 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244362 kubelet[2768]: E0515 15:13:44.244313 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.244453 kubelet[2768]: E0515 15:13:44.244439 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244453 kubelet[2768]: W0515 15:13:44.244450 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244518 kubelet[2768]: E0515 15:13:44.244459 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.244589 kubelet[2768]: E0515 15:13:44.244579 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244589 kubelet[2768]: W0515 15:13:44.244587 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244651 kubelet[2768]: E0515 15:13:44.244594 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.244722 kubelet[2768]: E0515 15:13:44.244712 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.244722 kubelet[2768]: W0515 15:13:44.244720 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.244773 kubelet[2768]: E0515 15:13:44.244727 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.257420 kubelet[2768]: E0515 15:13:44.257381 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.257843 kubelet[2768]: W0515 15:13:44.257616 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.257843 kubelet[2768]: E0515 15:13:44.257654 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.258005 kubelet[2768]: E0515 15:13:44.257993 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.258271 kubelet[2768]: W0515 15:13:44.258054 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.258271 kubelet[2768]: E0515 15:13:44.258076 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.258489 kubelet[2768]: E0515 15:13:44.258472 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.258575 kubelet[2768]: W0515 15:13:44.258560 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.258657 kubelet[2768]: E0515 15:13:44.258644 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.258952 kubelet[2768]: E0515 15:13:44.258912 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.258952 kubelet[2768]: W0515 15:13:44.258931 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.258952 kubelet[2768]: E0515 15:13:44.258948 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.259100 kubelet[2768]: E0515 15:13:44.259095 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.259129 kubelet[2768]: W0515 15:13:44.259102 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.259129 kubelet[2768]: E0515 15:13:44.259115 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.259274 kubelet[2768]: E0515 15:13:44.259261 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.259274 kubelet[2768]: W0515 15:13:44.259270 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.259369 kubelet[2768]: E0515 15:13:44.259347 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.259489 kubelet[2768]: E0515 15:13:44.259474 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.259489 kubelet[2768]: W0515 15:13:44.259483 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.259625 kubelet[2768]: E0515 15:13:44.259605 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.259691 kubelet[2768]: E0515 15:13:44.259678 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.259691 kubelet[2768]: W0515 15:13:44.259689 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.259826 kubelet[2768]: E0515 15:13:44.259702 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.259878 kubelet[2768]: E0515 15:13:44.259870 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.259919 kubelet[2768]: W0515 15:13:44.259878 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.259919 kubelet[2768]: E0515 15:13:44.259898 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.260090 kubelet[2768]: E0515 15:13:44.260066 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.260090 kubelet[2768]: W0515 15:13:44.260086 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.260226 kubelet[2768]: E0515 15:13:44.260116 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.260312 kubelet[2768]: E0515 15:13:44.260306 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.260366 kubelet[2768]: W0515 15:13:44.260313 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.260366 kubelet[2768]: E0515 15:13:44.260330 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.260928 kubelet[2768]: E0515 15:13:44.260888 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.260928 kubelet[2768]: W0515 15:13:44.260904 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.260928 kubelet[2768]: E0515 15:13:44.260919 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.261294 kubelet[2768]: E0515 15:13:44.261280 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.261294 kubelet[2768]: W0515 15:13:44.261291 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.261440 kubelet[2768]: E0515 15:13:44.261409 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.261544 kubelet[2768]: E0515 15:13:44.261466 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.261544 kubelet[2768]: W0515 15:13:44.261473 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.261544 kubelet[2768]: E0515 15:13:44.261500 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.261647 kubelet[2768]: E0515 15:13:44.261599 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.261647 kubelet[2768]: W0515 15:13:44.261605 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.261647 kubelet[2768]: E0515 15:13:44.261622 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.261780 kubelet[2768]: E0515 15:13:44.261763 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.261780 kubelet[2768]: W0515 15:13:44.261773 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.261837 kubelet[2768]: E0515 15:13:44.261781 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.261939 kubelet[2768]: E0515 15:13:44.261925 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.261939 kubelet[2768]: W0515 15:13:44.261935 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.261991 kubelet[2768]: E0515 15:13:44.261942 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:13:44.262372 kubelet[2768]: E0515 15:13:44.262352 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:13:44.262372 kubelet[2768]: W0515 15:13:44.262366 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:13:44.262464 kubelet[2768]: E0515 15:13:44.262376 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:13:44.914482 containerd[1566]: time="2025-05-15T15:13:44.913987772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:44.914872 containerd[1566]: time="2025-05-15T15:13:44.914725523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 15:13:44.915856 containerd[1566]: time="2025-05-15T15:13:44.915822356Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:44.923292 containerd[1566]: time="2025-05-15T15:13:44.923209339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:44.925197 containerd[1566]: time="2025-05-15T15:13:44.925075960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.947864911s" May 15 15:13:44.925197 containerd[1566]: time="2025-05-15T15:13:44.925115447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 15:13:44.929661 containerd[1566]: time="2025-05-15T15:13:44.929261190Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 15:13:44.936497 containerd[1566]: time="2025-05-15T15:13:44.936448562Z" level=info msg="Container 939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:44.944788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111410660.mount: Deactivated successfully. May 15 15:13:44.960250 containerd[1566]: time="2025-05-15T15:13:44.960208183Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\"" May 15 15:13:44.961482 containerd[1566]: time="2025-05-15T15:13:44.961204159Z" level=info msg="StartContainer for \"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\"" May 15 15:13:44.962929 containerd[1566]: time="2025-05-15T15:13:44.962897434Z" level=info msg="connecting to shim 939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a" address="unix:///run/containerd/s/6fb3aa088f0edaf21bd87e29aee2457469ddeb6e744a09ed5e9b09221fe5d21e" protocol=ttrpc version=3 May 15 15:13:44.994493 systemd[1]: Started cri-containerd-939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a.scope - libcontainer container 939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a. 
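Note on the long run of driver-call.go and plugins.go entries above: the kubelet's FlexVolume prober is repeatedly executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init". That binary is not installed yet (installing it is what the flexvol-driver container created from the pod2daemon-flexvol image in this span is for), so the call produces no output and the prober's JSON decode fails with "unexpected end of JSON input". Below is a minimal sketch of the init handshake the prober expects; the struct only loosely mirrors the kubelet's DriverStatus and this program is an illustration, not calico's actual uds driver.

```go
// Sketch only: a FlexVolume driver must answer "init" with a JSON status
// document on stdout. An empty response is exactly what produces the
// "unexpected end of JSON input" errors logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus loosely mirrors the fields the kubelet's driver-call.go decodes.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Tell the kubelet the driver is usable and does not implement attach.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Every other call is declined so the kubelet falls back to its defaults.
	reply(driverStatus{Status: "Not supported", Message: "sketch driver: call not implemented"})
}
```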
May 15 15:13:45.052253 containerd[1566]: time="2025-05-15T15:13:45.052012049Z" level=info msg="StartContainer for \"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\" returns successfully" May 15 15:13:45.070260 systemd[1]: cri-containerd-939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a.scope: Deactivated successfully. May 15 15:13:45.092570 containerd[1566]: time="2025-05-15T15:13:45.092476591Z" level=info msg="received exit event container_id:\"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\" id:\"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\" pid:3409 exited_at:{seconds:1747322025 nanos:72783292}" May 15 15:13:45.092873 containerd[1566]: time="2025-05-15T15:13:45.092652796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\" id:\"939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a\" pid:3409 exited_at:{seconds:1747322025 nanos:72783292}" May 15 15:13:45.130480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-939a5c377ffdb225cabfacff48cc6f8f4979eafa2be00b38cdac90cb5c52a72a-rootfs.mount: Deactivated successfully. May 15 15:13:45.230402 kubelet[2768]: E0515 15:13:45.229797 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:45.233449 containerd[1566]: time="2025-05-15T15:13:45.233403891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 15:13:45.250805 kubelet[2768]: I0515 15:13:45.250712 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64b5f48db9-jvlhw" podStartSLOduration=3.249272477 podStartE2EDuration="5.250689834s" podCreationTimestamp="2025-05-15 15:13:40 +0000 UTC" firstStartedPulling="2025-05-15 15:13:40.97558864 +0000 UTC m=+20.044762416" lastFinishedPulling="2025-05-15 15:13:42.977005998 +0000 UTC m=+22.046179773" observedRunningTime="2025-05-15 15:13:43.241426889 +0000 UTC m=+22.310600686" watchObservedRunningTime="2025-05-15 15:13:45.250689834 +0000 UTC m=+24.319863633" May 15 15:13:46.114845 kubelet[2768]: E0515 15:13:46.114432 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:48.114142 kubelet[2768]: E0515 15:13:48.114036 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:49.481102 kubelet[2768]: I0515 15:13:49.481013 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 15:13:49.484376 kubelet[2768]: E0515 15:13:49.482278 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:50.115461 kubelet[2768]: E0515 15:13:50.115408 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:50.136431 containerd[1566]: time="2025-05-15T15:13:50.136373157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:50.137926 containerd[1566]: time="2025-05-15T15:13:50.137544674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 15:13:50.138236 containerd[1566]: time="2025-05-15T15:13:50.138199745Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:50.140748 containerd[1566]: time="2025-05-15T15:13:50.140685966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:13:50.141236 containerd[1566]: time="2025-05-15T15:13:50.141207537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.907756667s" May 15 15:13:50.141421 containerd[1566]: time="2025-05-15T15:13:50.141242118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 15:13:50.146842 containerd[1566]: time="2025-05-15T15:13:50.146191258Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 15:13:50.159026 containerd[1566]: time="2025-05-15T15:13:50.158980340Z" level=info msg="Container 0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573: CDI devices from CRI Config.CDIDevices: []" May 15 15:13:50.162567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144309911.mount: Deactivated successfully. May 15 15:13:50.185902 containerd[1566]: time="2025-05-15T15:13:50.185110348Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\"" May 15 15:13:50.188526 containerd[1566]: time="2025-05-15T15:13:50.188268763Z" level=info msg="StartContainer for \"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\"" May 15 15:13:50.193488 containerd[1566]: time="2025-05-15T15:13:50.192798941Z" level=info msg="connecting to shim 0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573" address="unix:///run/containerd/s/6fb3aa088f0edaf21bd87e29aee2457469ddeb6e744a09ed5e9b09221fe5d21e" protocol=ttrpc version=3 May 15 15:13:50.241423 systemd[1]: Started cri-containerd-0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573.scope - libcontainer container 0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573. 
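Note: the repeated "container runtime network not ready ... cni plugin not initialized" messages for csi-node-driver-ssx6b persist until the install-cni container started here writes a CNI configuration onto the node. A quick way to check whether that has happened yet, assuming containerd's default CNI config directory /etc/cni/net.d (this helper is an illustration, not part of the log):

```go
// Sketch: list the node's CNI config directory. The kubelet keeps reporting
// NetworkReady=false until a *.conf / *.conflist file appears here.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d" // containerd's default; adjust if the CRI is configured differently
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Println("no CNI config yet: install-cni has not finished")
		return
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```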
May 15 15:13:50.278204 kubelet[2768]: E0515 15:13:50.276955 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:50.335394 containerd[1566]: time="2025-05-15T15:13:50.335343432Z" level=info msg="StartContainer for \"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\" returns successfully" May 15 15:13:50.915272 systemd[1]: cri-containerd-0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573.scope: Deactivated successfully. May 15 15:13:50.916845 systemd[1]: cri-containerd-0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573.scope: Consumed 576ms CPU time, 141M memory peak, 3.1M read from disk, 154M written to disk. May 15 15:13:50.939478 containerd[1566]: time="2025-05-15T15:13:50.939343747Z" level=info msg="received exit event container_id:\"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\" id:\"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\" pid:3474 exited_at:{seconds:1747322030 nanos:916461137}" May 15 15:13:50.940392 containerd[1566]: time="2025-05-15T15:13:50.940335057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\" id:\"0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573\" pid:3474 exited_at:{seconds:1747322030 nanos:916461137}" May 15 15:13:50.962449 kubelet[2768]: I0515 15:13:50.960753 2768 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 15:13:50.982330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c6cc72dde307607e6facc64be44e75ca700023c4ad3fddc2b983ae45bc9d573-rootfs.mount: Deactivated successfully. May 15 15:13:51.073619 kubelet[2768]: I0515 15:13:51.073007 2768 topology_manager.go:215] "Topology Admit Handler" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nzhxw" May 15 15:13:51.084194 kubelet[2768]: I0515 15:13:51.084149 2768 topology_manager.go:215] "Topology Admit Handler" podUID="4fffeabb-bc8e-44cd-82d2-acd711ecce53" podNamespace="calico-apiserver" podName="calico-apiserver-6888964bcc-b2qkf" May 15 15:13:51.085406 systemd[1]: Created slice kubepods-burstable-pod9cbb0523_a6f6_461c_a2a5_fad5b947b233.slice - libcontainer container kubepods-burstable-pod9cbb0523_a6f6_461c_a2a5_fad5b947b233.slice. 
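Note: the recurring dns.go "Nameserver limits exceeded" entries (here and earlier in this log) mean the resolv.conf the kubelet merges for pods carries more nameserver lines than the three it will propagate, so the extras are dropped; the applied line it reports even repeats 67.207.67.2. A small way to inspect what the kubelet is working from, assuming the default /etc/resolv.conf path (illustration only):

```go
// Sketch: count the nameserver entries the kubelet would consider.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Printf("%d nameserver entries: %v\n", len(servers), servers)
	if len(servers) > 3 {
		fmt.Println("more than three: the kubelet will omit the rest and log the warning above")
	}
}
```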
May 15 15:13:51.087335 kubelet[2768]: I0515 15:13:51.087279 2768 topology_manager.go:215] "Topology Admit Handler" podUID="85795e54-736b-42e9-a348-a1b529022653" podNamespace="calico-system" podName="calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:51.089066 kubelet[2768]: I0515 15:13:51.088835 2768 topology_manager.go:215] "Topology Admit Handler" podUID="ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f" podNamespace="calico-apiserver" podName="calico-apiserver-6888964bcc-h57x6" May 15 15:13:51.097186 kubelet[2768]: I0515 15:13:51.097033 2768 topology_manager.go:215] "Topology Admit Handler" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zchv5" May 15 15:13:51.108026 kubelet[2768]: W0515 15:13:51.107915 2768 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4334.0.0-a-3982d56781" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-3982d56781' and this object May 15 15:13:51.108026 kubelet[2768]: E0515 15:13:51.107985 2768 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4334.0.0-a-3982d56781" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-3982d56781' and this object May 15 15:13:51.116201 kubelet[2768]: I0515 15:13:51.114450 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpf5\" (UniqueName: \"kubernetes.io/projected/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-kube-api-access-nwpf5\") pod \"calico-apiserver-6888964bcc-h57x6\" (UID: \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\") " pod="calico-apiserver/calico-apiserver-6888964bcc-h57x6" May 15 15:13:51.116201 kubelet[2768]: I0515 15:13:51.114506 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cbb0523-a6f6-461c-a2a5-fad5b947b233-config-volume\") pod \"coredns-7db6d8ff4d-nzhxw\" (UID: \"9cbb0523-a6f6-461c-a2a5-fad5b947b233\") " pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:51.116201 kubelet[2768]: I0515 15:13:51.114530 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4fffeabb-bc8e-44cd-82d2-acd711ecce53-calico-apiserver-certs\") pod \"calico-apiserver-6888964bcc-b2qkf\" (UID: \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\") " pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" May 15 15:13:51.116201 kubelet[2768]: I0515 15:13:51.114553 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnzg\" (UniqueName: \"kubernetes.io/projected/4fffeabb-bc8e-44cd-82d2-acd711ecce53-kube-api-access-qfnzg\") pod \"calico-apiserver-6888964bcc-b2qkf\" (UID: \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\") " pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" May 15 15:13:51.116201 kubelet[2768]: I0515 15:13:51.114574 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad4b350-5146-45de-9d05-ced32cc472bb-config-volume\") pod 
\"coredns-7db6d8ff4d-zchv5\" (UID: \"1ad4b350-5146-45de-9d05-ced32cc472bb\") " pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:51.116451 kubelet[2768]: I0515 15:13:51.114596 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-calico-apiserver-certs\") pod \"calico-apiserver-6888964bcc-h57x6\" (UID: \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\") " pod="calico-apiserver/calico-apiserver-6888964bcc-h57x6" May 15 15:13:51.116451 kubelet[2768]: I0515 15:13:51.114614 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478kp\" (UniqueName: \"kubernetes.io/projected/1ad4b350-5146-45de-9d05-ced32cc472bb-kube-api-access-478kp\") pod \"coredns-7db6d8ff4d-zchv5\" (UID: \"1ad4b350-5146-45de-9d05-ced32cc472bb\") " pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:51.116451 kubelet[2768]: I0515 15:13:51.114630 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84svn\" (UniqueName: \"kubernetes.io/projected/9cbb0523-a6f6-461c-a2a5-fad5b947b233-kube-api-access-84svn\") pod \"coredns-7db6d8ff4d-nzhxw\" (UID: \"9cbb0523-a6f6-461c-a2a5-fad5b947b233\") " pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:51.116451 kubelet[2768]: I0515 15:13:51.114652 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85795e54-736b-42e9-a348-a1b529022653-tigera-ca-bundle\") pod \"calico-kube-controllers-5595bbd956-4ksb6\" (UID: \"85795e54-736b-42e9-a348-a1b529022653\") " pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:51.116451 kubelet[2768]: I0515 15:13:51.114674 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6flx\" (UniqueName: \"kubernetes.io/projected/85795e54-736b-42e9-a348-a1b529022653-kube-api-access-n6flx\") pod \"calico-kube-controllers-5595bbd956-4ksb6\" (UID: \"85795e54-736b-42e9-a348-a1b529022653\") " pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:51.121790 systemd[1]: Created slice kubepods-besteffort-pod4fffeabb_bc8e_44cd_82d2_acd711ecce53.slice - libcontainer container kubepods-besteffort-pod4fffeabb_bc8e_44cd_82d2_acd711ecce53.slice. May 15 15:13:51.135677 systemd[1]: Created slice kubepods-besteffort-pod85795e54_736b_42e9_a348_a1b529022653.slice - libcontainer container kubepods-besteffort-pod85795e54_736b_42e9_a348_a1b529022653.slice. May 15 15:13:51.146427 systemd[1]: Created slice kubepods-besteffort-podecf1af4b_6060_4bcc_b89d_a9b340cfcc7f.slice - libcontainer container kubepods-besteffort-podecf1af4b_6060_4bcc_b89d_a9b340cfcc7f.slice. May 15 15:13:51.157892 systemd[1]: Created slice kubepods-burstable-pod1ad4b350_5146_45de_9d05_ced32cc472bb.slice - libcontainer container kubepods-burstable-pod1ad4b350_5146_45de_9d05_ced32cc472bb.slice. 
May 15 15:13:51.285195 kubelet[2768]: E0515 15:13:51.284778 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:51.286300 containerd[1566]: time="2025-05-15T15:13:51.286271763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:13:51.316648 kubelet[2768]: I0515 15:13:51.316597 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:13:51.317076 kubelet[2768]: I0515 15:13:51.316799 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:13:51.321156 kubelet[2768]: I0515 15:13:51.321136 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:13:51.339523 kubelet[2768]: I0515 15:13:51.339477 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:13:51.340166 kubelet[2768]: I0515 15:13:51.339954 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6888964bcc-h57x6","calico-apiserver/calico-apiserver-6888964bcc-b2qkf","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","tigera-operator/tigera-operator-797db67f8-csfpd","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:13:51.340653 kubelet[2768]: E0515 15:13:51.340591 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[calico-apiserver-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="calico-apiserver/calico-apiserver-6888964bcc-h57x6" podUID="ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f" May 15 15:13:51.408081 kubelet[2768]: E0515 15:13:51.408035 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:51.409208 containerd[1566]: time="2025-05-15T15:13:51.408764774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:13:51.443153 containerd[1566]: time="2025-05-15T15:13:51.442857440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:13:51.467383 kubelet[2768]: E0515 15:13:51.466974 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:51.469362 containerd[1566]: time="2025-05-15T15:13:51.467683965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:13:51.598632 containerd[1566]: time="2025-05-15T15:13:51.598465742Z" level=error msg="Failed to destroy network for sandbox \"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.600295 containerd[1566]: time="2025-05-15T15:13:51.600144957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.601034 kubelet[2768]: E0515 15:13:51.600521 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.601034 kubelet[2768]: E0515 15:13:51.600595 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:51.601034 kubelet[2768]: E0515 15:13:51.600616 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:51.601992 kubelet[2768]: E0515 15:13:51.600687 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3563dfb25e1bc9ef8834950f6cd5f1f8e4f756802b01d9decf81e95a8e6892f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:13:51.615858 containerd[1566]: time="2025-05-15T15:13:51.615800805Z" level=error msg="Failed to destroy network for sandbox \"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.617014 containerd[1566]: time="2025-05-15T15:13:51.616965638Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.617500 kubelet[2768]: E0515 15:13:51.617465 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.618103 kubelet[2768]: E0515 15:13:51.617641 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:51.618103 kubelet[2768]: E0515 15:13:51.617676 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:51.618103 kubelet[2768]: E0515 15:13:51.617723 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03c3134848f960d03785e2f9f209201d11ceabb7a1966851aa437c07d99f3505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:13:51.620362 containerd[1566]: time="2025-05-15T15:13:51.620319484Z" level=error msg="Failed to destroy network for sandbox \"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.621558 containerd[1566]: time="2025-05-15T15:13:51.621500257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.622057 kubelet[2768]: E0515 15:13:51.622026 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:51.622294 kubelet[2768]: E0515 15:13:51.622256 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:51.622609 kubelet[2768]: E0515 15:13:51.622588 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:51.622750 kubelet[2768]: E0515 15:13:51.622711 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83028b18e7b8fd862213be519f6c9600d6b674fa8e2c72afabc3220a43ed302a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:13:52.121544 systemd[1]: Created slice kubepods-besteffort-pod7521021f_77bb_4466_96bd_6730a9b2c004.slice - libcontainer container kubepods-besteffort-pod7521021f_77bb_4466_96bd_6730a9b2c004.slice. 
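Every sandbox failure above carries the same underlying error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the error text says should exist once the calico/node container is running and has mounted /var/lib/calico/, so CreatePodSandbox keeps failing for every pod that needs pod networking. Below is a minimal Go sketch of the check that error message describes; it is purely illustrative and not the plugin's actual source, with only the path taken from the log.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path named in the CNI errors above; the error text
// itself says to check that the calico/node container is running and has
// mounted /var/lib/calico/, which is where this file would come from.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the state the kubelet keeps logging: the file does not
		// exist yet because calico-node has not started successfully.
		fmt.Fprintf(os.Stderr, "%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Printf("Calico node name: %s\n", data)
}
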
May 15 15:13:52.126108 containerd[1566]: time="2025-05-15T15:13:52.126031019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:13:52.190436 containerd[1566]: time="2025-05-15T15:13:52.190353165Z" level=error msg="Failed to destroy network for sandbox \"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.191537 containerd[1566]: time="2025-05-15T15:13:52.191481371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.192038 kubelet[2768]: E0515 15:13:52.191723 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.192038 kubelet[2768]: E0515 15:13:52.191782 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:52.192038 kubelet[2768]: E0515 15:13:52.191802 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:52.192576 kubelet[2768]: E0515 15:13:52.191853 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d8ce0e930189bed27e3266eb366265f460f8d485e933b36d5158606435c6954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:13:52.295275 kubelet[2768]: I0515 15:13:52.295245 2768 eviction_manager.go:616] "Eviction manager: pod is 
evicted successfully" pod="calico-apiserver/calico-apiserver-6888964bcc-h57x6" May 15 15:13:52.295275 kubelet[2768]: I0515 15:13:52.295271 2768 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6888964bcc-h57x6"] May 15 15:13:52.327375 kubelet[2768]: I0515 15:13:52.326613 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwpf5\" (UniqueName: \"kubernetes.io/projected/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-kube-api-access-nwpf5\") pod \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\" (UID: \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\") " May 15 15:13:52.327375 kubelet[2768]: I0515 15:13:52.326671 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-calico-apiserver-certs\") pod \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\" (UID: \"ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f\") " May 15 15:13:52.329637 containerd[1566]: time="2025-05-15T15:13:52.329561051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888964bcc-b2qkf,Uid:4fffeabb-bc8e-44cd-82d2-acd711ecce53,Namespace:calico-apiserver,Attempt:0,}" May 15 15:13:52.352584 systemd[1]: var-lib-kubelet-pods-ecf1af4b\x2d6060\x2d4bcc\x2db89d\x2da9b340cfcc7f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 15:13:52.354772 kubelet[2768]: I0515 15:13:52.354718 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f" (UID: "ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 15:13:52.359837 kubelet[2768]: I0515 15:13:52.359773 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-kube-api-access-nwpf5" (OuterVolumeSpecName: "kube-api-access-nwpf5") pod "ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f" (UID: "ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f"). InnerVolumeSpecName "kube-api-access-nwpf5". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:13:52.361720 systemd[1]: var-lib-kubelet-pods-ecf1af4b\x2d6060\x2d4bcc\x2db89d\x2da9b340cfcc7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnwpf5.mount: Deactivated successfully. 
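The systemd mount units logged above, such as var-lib-kubelet-pods-ecf1af4b\x2d...\x7esecret-calico\x2dapiserver\x2dcerts.mount, are the kubelet volume paths under /var/lib/kubelet/pods/<pod-UID>/volumes/ rendered with systemd's unit-name escaping: '/' becomes '-', and bytes that are not safe in a unit name become \xHH (so \x2d is '-' and \x7e is '~'). The following is a small Go sketch that reverses that escaping, assuming well-formed \xHH sequences; it is an illustration, not systemd's implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's path escaping as it appears in mount
// unit names: '-' stands for '/', and "\xHH" stands for the byte 0xHH.
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // skip past "xHH"; the loop's i++ moves to the next char
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	unit := `var-lib-kubelet-pods-ecf1af4b\x2d6060\x2d4bcc\x2db89d\x2da9b340cfcc7f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /var/lib/kubelet/pods/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f/volumes/kubernetes.io~secret/calico-apiserver-certs
}
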
May 15 15:13:52.427634 kubelet[2768]: I0515 15:13:52.427491 2768 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nwpf5\" (UniqueName: \"kubernetes.io/projected/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-kube-api-access-nwpf5\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:52.427634 kubelet[2768]: I0515 15:13:52.427528 2768 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecf1af4b-6060-4bcc-b89d-a9b340cfcc7f-calico-apiserver-certs\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:52.448657 containerd[1566]: time="2025-05-15T15:13:52.448598017Z" level=error msg="Failed to destroy network for sandbox \"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.451677 containerd[1566]: time="2025-05-15T15:13:52.451607370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6888964bcc-b2qkf,Uid:4fffeabb-bc8e-44cd-82d2-acd711ecce53,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.452141 systemd[1]: run-netns-cni\x2d69acf399\x2dd9fd\x2dfc14\x2d9e81\x2dd9ba25f998ba.mount: Deactivated successfully. May 15 15:13:52.454355 kubelet[2768]: E0515 15:13:52.453060 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:13:52.454355 kubelet[2768]: E0515 15:13:52.453136 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" May 15 15:13:52.454355 kubelet[2768]: E0515 15:13:52.453161 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" May 15 15:13:52.455109 kubelet[2768]: E0515 15:13:52.453561 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6888964bcc-b2qkf_calico-apiserver(4fffeabb-bc8e-44cd-82d2-acd711ecce53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6888964bcc-b2qkf_calico-apiserver(4fffeabb-bc8e-44cd-82d2-acd711ecce53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e346c27f4d674ac47587718d51812632957704661c35e1bba068ddd7ecb2a557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" podUID="4fffeabb-bc8e-44cd-82d2-acd711ecce53" May 15 15:13:53.134434 systemd[1]: Removed slice kubepods-besteffort-podecf1af4b_6060_4bcc_b89d_a9b340cfcc7f.slice - libcontainer container kubepods-besteffort-podecf1af4b_6060_4bcc_b89d_a9b340cfcc7f.slice. May 15 15:13:54.296200 kubelet[2768]: I0515 15:13:54.296066 2768 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6888964bcc-h57x6"] May 15 15:13:54.322205 kubelet[2768]: I0515 15:13:54.322029 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:13:54.322205 kubelet[2768]: I0515 15:13:54.322081 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:13:54.325153 kubelet[2768]: I0515 15:13:54.325121 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:13:54.348867 kubelet[2768]: I0515 15:13:54.348834 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:13:54.349040 kubelet[2768]: I0515 15:13:54.348965 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6888964bcc-b2qkf","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","tigera-operator/tigera-operator-797db67f8-csfpd","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:13:54.356451 kubelet[2768]: I0515 15:13:54.356419 2768 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6888964bcc-b2qkf" May 15 15:13:54.356451 kubelet[2768]: I0515 15:13:54.356451 2768 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6888964bcc-b2qkf"] May 15 15:13:54.446160 kubelet[2768]: I0515 15:13:54.446105 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfnzg\" (UniqueName: \"kubernetes.io/projected/4fffeabb-bc8e-44cd-82d2-acd711ecce53-kube-api-access-qfnzg\") pod \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\" (UID: \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\") " May 15 15:13:54.446732 kubelet[2768]: I0515 15:13:54.446377 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4fffeabb-bc8e-44cd-82d2-acd711ecce53-calico-apiserver-certs\") pod \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\" (UID: \"4fffeabb-bc8e-44cd-82d2-acd711ecce53\") " May 15 15:13:54.456214 kubelet[2768]: I0515 15:13:54.453115 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4fffeabb-bc8e-44cd-82d2-acd711ecce53-kube-api-access-qfnzg" (OuterVolumeSpecName: "kube-api-access-qfnzg") pod "4fffeabb-bc8e-44cd-82d2-acd711ecce53" (UID: "4fffeabb-bc8e-44cd-82d2-acd711ecce53"). InnerVolumeSpecName "kube-api-access-qfnzg". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:13:54.458681 systemd[1]: var-lib-kubelet-pods-4fffeabb\x2dbc8e\x2d44cd\x2d82d2\x2dacd711ecce53-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 15:13:54.458831 systemd[1]: var-lib-kubelet-pods-4fffeabb\x2dbc8e\x2d44cd\x2d82d2\x2dacd711ecce53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqfnzg.mount: Deactivated successfully. May 15 15:13:54.462029 kubelet[2768]: I0515 15:13:54.461692 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fffeabb-bc8e-44cd-82d2-acd711ecce53-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4fffeabb-bc8e-44cd-82d2-acd711ecce53" (UID: "4fffeabb-bc8e-44cd-82d2-acd711ecce53"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 15:13:54.548248 kubelet[2768]: I0515 15:13:54.547289 2768 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4fffeabb-bc8e-44cd-82d2-acd711ecce53-calico-apiserver-certs\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:54.548691 kubelet[2768]: I0515 15:13:54.548494 2768 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qfnzg\" (UniqueName: \"kubernetes.io/projected/4fffeabb-bc8e-44cd-82d2-acd711ecce53-kube-api-access-qfnzg\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:55.121899 systemd[1]: Removed slice kubepods-besteffort-pod4fffeabb_bc8e_44cd_82d2_acd711ecce53.slice - libcontainer container kubepods-besteffort-pod4fffeabb_bc8e_44cd_82d2_acd711ecce53.slice. 
May 15 15:13:55.356730 kubelet[2768]: I0515 15:13:55.356663 2768 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6888964bcc-b2qkf"] May 15 15:13:55.375557 kubelet[2768]: I0515 15:13:55.374716 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:13:55.375557 kubelet[2768]: I0515 15:13:55.374757 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:13:55.384336 kubelet[2768]: I0515 15:13:55.383558 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:13:55.405452 kubelet[2768]: I0515 15:13:55.404908 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:13:55.405452 kubelet[2768]: I0515 15:13:55.405032 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","tigera-operator/tigera-operator-797db67f8-csfpd","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:13:55.405452 kubelet[2768]: E0515 15:13:55.405080 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:55.405452 kubelet[2768]: E0515 15:13:55.405094 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:55.405452 kubelet[2768]: E0515 15:13:55.405102 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:55.405452 kubelet[2768]: E0515 15:13:55.405113 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:13:55.405452 kubelet[2768]: E0515 15:13:55.405123 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:55.409743 containerd[1566]: time="2025-05-15T15:13:55.409667663Z" level=info msg="StopContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" with timeout 60 (s)" May 15 15:13:55.423780 containerd[1566]: time="2025-05-15T15:13:55.423737852Z" level=info msg="Stop container \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" with signal terminated" May 15 15:13:55.481548 systemd[1]: cri-containerd-4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a.scope: Deactivated successfully. May 15 15:13:55.481847 systemd[1]: cri-containerd-4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a.scope: Consumed 1.407s CPU time, 31.3M memory peak, 388K read from disk. 
May 15 15:13:55.485233 containerd[1566]: time="2025-05-15T15:13:55.485138461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" id:\"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" pid:3127 exited_at:{seconds:1747322035 nanos:484195364}" May 15 15:13:55.486283 containerd[1566]: time="2025-05-15T15:13:55.486211178Z" level=info msg="received exit event container_id:\"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" id:\"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" pid:3127 exited_at:{seconds:1747322035 nanos:484195364}" May 15 15:13:55.529780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a-rootfs.mount: Deactivated successfully. May 15 15:13:55.542903 containerd[1566]: time="2025-05-15T15:13:55.542845802Z" level=info msg="StopContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" returns successfully" May 15 15:13:55.544000 containerd[1566]: time="2025-05-15T15:13:55.543968223Z" level=info msg="StopPodSandbox for \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\"" May 15 15:13:55.550971 containerd[1566]: time="2025-05-15T15:13:55.550923666Z" level=info msg="Container to stop \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 15:13:55.563520 systemd[1]: cri-containerd-f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5.scope: Deactivated successfully. May 15 15:13:55.570416 containerd[1566]: time="2025-05-15T15:13:55.570288775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" id:\"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" pid:2876 exit_status:137 exited_at:{seconds:1747322035 nanos:569446243}" May 15 15:13:55.622451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5-rootfs.mount: Deactivated successfully. May 15 15:13:55.624946 containerd[1566]: time="2025-05-15T15:13:55.624901489Z" level=info msg="shim disconnected" id=f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5 namespace=k8s.io May 15 15:13:55.625366 containerd[1566]: time="2025-05-15T15:13:55.625341974Z" level=warning msg="cleaning up after shim disconnected" id=f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5 namespace=k8s.io May 15 15:13:55.652459 containerd[1566]: time="2025-05-15T15:13:55.625448199Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 15:13:55.682988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5-shm.mount: Deactivated successfully. 
May 15 15:13:55.688096 containerd[1566]: time="2025-05-15T15:13:55.687253608Z" level=info msg="received exit event sandbox_id:\"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" exit_status:137 exited_at:{seconds:1747322035 nanos:569446243}" May 15 15:13:55.693779 containerd[1566]: time="2025-05-15T15:13:55.693715628Z" level=info msg="TearDown network for sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" successfully" May 15 15:13:55.694268 containerd[1566]: time="2025-05-15T15:13:55.694241213Z" level=info msg="StopPodSandbox for \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" returns successfully" May 15 15:13:55.703547 kubelet[2768]: I0515 15:13:55.703498 2768 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-797db67f8-csfpd" May 15 15:13:55.703547 kubelet[2768]: I0515 15:13:55.703542 2768 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-csfpd"] May 15 15:13:55.740691 kubelet[2768]: I0515 15:13:55.740611 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rvt5f" nodeCondition=["DiskPressure"] May 15 15:13:55.758451 kubelet[2768]: I0515 15:13:55.758405 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvw94\" (UniqueName: \"kubernetes.io/projected/af8460a0-071d-4aa9-83fa-36dd1c683543-kube-api-access-nvw94\") pod \"af8460a0-071d-4aa9-83fa-36dd1c683543\" (UID: \"af8460a0-071d-4aa9-83fa-36dd1c683543\") " May 15 15:13:55.758451 kubelet[2768]: I0515 15:13:55.758446 2768 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af8460a0-071d-4aa9-83fa-36dd1c683543-var-lib-calico\") pod \"af8460a0-071d-4aa9-83fa-36dd1c683543\" (UID: \"af8460a0-071d-4aa9-83fa-36dd1c683543\") " May 15 15:13:55.758696 kubelet[2768]: I0515 15:13:55.758539 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8460a0-071d-4aa9-83fa-36dd1c683543-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "af8460a0-071d-4aa9-83fa-36dd1c683543" (UID: "af8460a0-071d-4aa9-83fa-36dd1c683543"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 15:13:55.770601 systemd[1]: var-lib-kubelet-pods-af8460a0\x2d071d\x2d4aa9\x2d83fa\x2d36dd1c683543-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvw94.mount: Deactivated successfully. May 15 15:13:55.771015 kubelet[2768]: I0515 15:13:55.770674 2768 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8460a0-071d-4aa9-83fa-36dd1c683543-kube-api-access-nvw94" (OuterVolumeSpecName: "kube-api-access-nvw94") pod "af8460a0-071d-4aa9-83fa-36dd1c683543" (UID: "af8460a0-071d-4aa9-83fa-36dd1c683543"). InnerVolumeSpecName "kube-api-access-nvw94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:13:55.792289 kubelet[2768]: I0515 15:13:55.792130 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6jt5b" nodeCondition=["DiskPressure"] May 15 15:13:55.849333 kubelet[2768]: I0515 15:13:55.849182 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-d7g6c" nodeCondition=["DiskPressure"] May 15 15:13:55.858931 kubelet[2768]: I0515 15:13:55.858892 2768 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af8460a0-071d-4aa9-83fa-36dd1c683543-var-lib-calico\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:55.859146 kubelet[2768]: I0515 15:13:55.859109 2768 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nvw94\" (UniqueName: \"kubernetes.io/projected/af8460a0-071d-4aa9-83fa-36dd1c683543-kube-api-access-nvw94\") on node \"ci-4334.0.0-a-3982d56781\" DevicePath \"\"" May 15 15:13:55.901337 kubelet[2768]: I0515 15:13:55.900790 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mwdgn" nodeCondition=["DiskPressure"] May 15 15:13:55.950257 kubelet[2768]: I0515 15:13:55.949118 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-t5qrt" nodeCondition=["DiskPressure"] May 15 15:13:55.999451 kubelet[2768]: I0515 15:13:55.999385 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6zg6j" nodeCondition=["DiskPressure"] May 15 15:13:56.103666 kubelet[2768]: I0515 15:13:56.103604 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-85gj5" nodeCondition=["DiskPressure"] May 15 15:13:56.215919 kubelet[2768]: I0515 15:13:56.214941 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-flmjs" nodeCondition=["DiskPressure"] May 15 15:13:56.302629 kubelet[2768]: I0515 15:13:56.302495 2768 scope.go:117] "RemoveContainer" containerID="4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a" May 15 15:13:56.308859 containerd[1566]: time="2025-05-15T15:13:56.308813904Z" level=info msg="RemoveContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\"" May 15 15:13:56.311899 kubelet[2768]: I0515 15:13:56.311854 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dsbqx" nodeCondition=["DiskPressure"] May 15 15:13:56.315412 systemd[1]: Removed slice kubepods-besteffort-podaf8460a0_071d_4aa9_83fa_36dd1c683543.slice - libcontainer container kubepods-besteffort-podaf8460a0_071d_4aa9_83fa_36dd1c683543.slice. May 15 15:13:56.315540 systemd[1]: kubepods-besteffort-podaf8460a0_071d_4aa9_83fa_36dd1c683543.slice: Consumed 1.440s CPU time, 31.5M memory peak, 388K read from disk. 
May 15 15:13:56.320077 containerd[1566]: time="2025-05-15T15:13:56.320024474Z" level=info msg="RemoveContainer for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" returns successfully" May 15 15:13:56.320350 kubelet[2768]: I0515 15:13:56.320327 2768 scope.go:117] "RemoveContainer" containerID="4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a" May 15 15:13:56.320645 containerd[1566]: time="2025-05-15T15:13:56.320599426Z" level=error msg="ContainerStatus for \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\": not found" May 15 15:13:56.320991 kubelet[2768]: E0515 15:13:56.320735 2768 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\": not found" containerID="4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a" May 15 15:13:56.320991 kubelet[2768]: I0515 15:13:56.320766 2768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a"} err="failed to get container status \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f09d63c0f67c8ad55fbcece1f67c152b4ee1eefc29b37cad1586dcf23c6f79a\": not found" May 15 15:13:56.387559 kubelet[2768]: I0515 15:13:56.386558 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xhjd2" nodeCondition=["DiskPressure"] May 15 15:13:56.489043 kubelet[2768]: I0515 15:13:56.488997 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4spdh" nodeCondition=["DiskPressure"] May 15 15:13:56.558996 kubelet[2768]: I0515 15:13:56.558952 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-q8zpg" nodeCondition=["DiskPressure"] May 15 15:13:56.691918 kubelet[2768]: I0515 15:13:56.691827 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nftvg" nodeCondition=["DiskPressure"] May 15 15:13:56.704063 kubelet[2768]: I0515 15:13:56.704002 2768 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-csfpd"] May 15 15:13:56.736549 kubelet[2768]: I0515 15:13:56.736508 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:13:56.736857 kubelet[2768]: I0515 15:13:56.736588 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:13:56.739847 containerd[1566]: time="2025-05-15T15:13:56.739594933Z" level=info msg="StopPodSandbox for \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\"" May 15 15:13:56.739847 containerd[1566]: time="2025-05-15T15:13:56.739728820Z" level=info msg="TearDown network for sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" successfully" May 15 15:13:56.739847 containerd[1566]: time="2025-05-15T15:13:56.739741195Z" level=info msg="StopPodSandbox for \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" returns successfully" May 15 15:13:56.741412 
containerd[1566]: time="2025-05-15T15:13:56.741381441Z" level=info msg="RemovePodSandbox for \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\"" May 15 15:13:56.742267 containerd[1566]: time="2025-05-15T15:13:56.742001035Z" level=info msg="Forcibly stopping sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\"" May 15 15:13:56.742267 containerd[1566]: time="2025-05-15T15:13:56.742124139Z" level=info msg="TearDown network for sandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" successfully" May 15 15:13:56.744819 containerd[1566]: time="2025-05-15T15:13:56.744788739Z" level=info msg="Ensure that sandbox f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5 in task-service has been cleanup successfully" May 15 15:13:56.748206 containerd[1566]: time="2025-05-15T15:13:56.748154265Z" level=info msg="RemovePodSandbox \"f1e02c03230530ca6526554e1f91257bbcbf03a32f5a90e20a71b0bcc79a64f5\" returns successfully" May 15 15:13:56.749876 kubelet[2768]: I0515 15:13:56.749694 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:13:56.776772 kubelet[2768]: I0515 15:13:56.776660 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:13:56.777061 kubelet[2768]: I0515 15:13:56.776939 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:13:56.777061 kubelet[2768]: E0515 15:13:56.776995 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:13:56.777061 kubelet[2768]: E0515 15:13:56.777028 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:13:56.777061 kubelet[2768]: E0515 15:13:56.777046 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:13:56.777061 kubelet[2768]: E0515 15:13:56.777055 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777101 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777123 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777132 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777155 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777167 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:13:56.777307 kubelet[2768]: E0515 15:13:56.777201 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:13:56.777307 kubelet[2768]: I0515 15:13:56.777215 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:13:56.842544 kubelet[2768]: I0515 15:13:56.841892 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mtrkq" nodeCondition=["DiskPressure"] May 15 15:13:56.989710 kubelet[2768]: I0515 15:13:56.988905 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wq6bj" nodeCondition=["DiskPressure"] May 15 15:13:57.141689 kubelet[2768]: I0515 15:13:57.141078 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-x2sw6" nodeCondition=["DiskPressure"] May 15 15:13:57.286151 kubelet[2768]: I0515 15:13:57.286093 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-j9lps" nodeCondition=["DiskPressure"] May 15 15:13:57.370563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807661961.mount: Deactivated successfully. May 15 15:13:57.372687 containerd[1566]: time="2025-05-15T15:13:57.372501733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount807661961: mkdir /var/lib/containerd/tmpmounts/containerd-mount807661961/usr/lib/.build-id/cc: no space left on device" May 15 15:13:57.372687 containerd[1566]: time="2025-05-15T15:13:57.372596643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:13:57.372880 kubelet[2768]: E0515 15:13:57.372773 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount807661961: mkdir /var/lib/containerd/tmpmounts/containerd-mount807661961/usr/lib/.build-id/cc: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:13:57.372880 kubelet[2768]: E0515 15:13:57.372826 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount807661961: mkdir /var/lib/containerd/tmpmounts/containerd-mount807661961/usr/lib/.build-id/cc: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:13:57.377252 kubelet[2768]: E0515 15:13:57.377167 2768 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-mxm84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-56p29_calico-system(ea3f9278-e4ee-4dca-80e1-48db54fe37e5): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount807661961: mkdir /var/lib/containerd/tmpmounts/containerd-mount807661961/usr/lib/.build-id/cc: no space left on device May 15 15:13:57.377534 kubelet[2768]: E0515 15:13:57.377246 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount807661961: mkdir /var/lib/containerd/tmpmounts/containerd-mount807661961/usr/lib/.build-id/cc: no space left on device\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:13:57.432902 kubelet[2768]: I0515 15:13:57.432047 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5df9v" nodeCondition=["DiskPressure"] May 15 15:13:57.594168 kubelet[2768]: I0515 15:13:57.594127 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rdmzj" nodeCondition=["DiskPressure"] May 15 15:13:57.735515 kubelet[2768]: I0515 15:13:57.735436 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8rfjj" nodeCondition=["DiskPressure"] May 15 15:13:57.886477 kubelet[2768]: I0515 15:13:57.886340 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4qnx8" nodeCondition=["DiskPressure"] May 15 15:13:58.034777 kubelet[2768]: I0515 15:13:58.034614 2768 eviction_manager.go:173] 
"Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mjrzn" nodeCondition=["DiskPressure"] May 15 15:13:58.287035 kubelet[2768]: I0515 15:13:58.286745 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-b5hrn" nodeCondition=["DiskPressure"] May 15 15:13:58.310541 kubelet[2768]: E0515 15:13:58.310507 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:13:58.311331 kubelet[2768]: E0515 15:13:58.311267 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:13:58.434148 kubelet[2768]: I0515 15:13:58.432660 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hj48j" nodeCondition=["DiskPressure"] May 15 15:13:58.587963 kubelet[2768]: I0515 15:13:58.587830 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7tqzd" nodeCondition=["DiskPressure"] May 15 15:13:58.684352 kubelet[2768]: I0515 15:13:58.684309 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rj7sz" nodeCondition=["DiskPressure"] May 15 15:13:58.785729 kubelet[2768]: I0515 15:13:58.785672 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8tcmm" nodeCondition=["DiskPressure"] May 15 15:13:58.886286 kubelet[2768]: I0515 15:13:58.886129 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7kfh9" nodeCondition=["DiskPressure"] May 15 15:13:58.983529 kubelet[2768]: I0515 15:13:58.983403 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-pgtbg" nodeCondition=["DiskPressure"] May 15 15:13:59.185570 kubelet[2768]: I0515 15:13:59.185424 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9nhts" nodeCondition=["DiskPressure"] May 15 15:13:59.283981 kubelet[2768]: I0515 15:13:59.283933 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-js28n" nodeCondition=["DiskPressure"] May 15 15:13:59.385193 kubelet[2768]: I0515 15:13:59.385106 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lngf2" nodeCondition=["DiskPressure"] May 15 15:13:59.485903 kubelet[2768]: I0515 15:13:59.485838 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-w5v2v" nodeCondition=["DiskPressure"] May 15 15:13:59.533810 kubelet[2768]: I0515 15:13:59.533298 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vw7rp" nodeCondition=["DiskPressure"] May 15 15:13:59.636525 kubelet[2768]: I0515 15:13:59.636471 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-db9g2" nodeCondition=["DiskPressure"] May 15 15:13:59.731746 kubelet[2768]: I0515 15:13:59.731694 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-r2nkj" 
nodeCondition=["DiskPressure"] May 15 15:13:59.834524 kubelet[2768]: I0515 15:13:59.834344 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xmzng" nodeCondition=["DiskPressure"] May 15 15:13:59.933244 kubelet[2768]: I0515 15:13:59.933184 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-gd4s2" nodeCondition=["DiskPressure"] May 15 15:14:00.034262 kubelet[2768]: I0515 15:14:00.034198 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qzsb9" nodeCondition=["DiskPressure"] May 15 15:14:00.137814 kubelet[2768]: I0515 15:14:00.137577 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-s965m" nodeCondition=["DiskPressure"] May 15 15:14:00.237348 kubelet[2768]: I0515 15:14:00.237294 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-kpnlc" nodeCondition=["DiskPressure"] May 15 15:14:00.333967 kubelet[2768]: I0515 15:14:00.333920 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qzw4m" nodeCondition=["DiskPressure"] May 15 15:14:00.433277 kubelet[2768]: I0515 15:14:00.432758 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lcfwf" nodeCondition=["DiskPressure"] May 15 15:14:00.636372 kubelet[2768]: I0515 15:14:00.636247 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-2556b" nodeCondition=["DiskPressure"] May 15 15:14:00.733426 kubelet[2768]: I0515 15:14:00.732608 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zcx4x" nodeCondition=["DiskPressure"] May 15 15:14:00.837627 kubelet[2768]: I0515 15:14:00.837523 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vn2rd" nodeCondition=["DiskPressure"] May 15 15:14:01.036717 kubelet[2768]: I0515 15:14:01.036378 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-b28zf" nodeCondition=["DiskPressure"] May 15 15:14:01.137233 kubelet[2768]: I0515 15:14:01.137142 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dnltd" nodeCondition=["DiskPressure"] May 15 15:14:01.237798 kubelet[2768]: I0515 15:14:01.237658 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-bpgcz" nodeCondition=["DiskPressure"] May 15 15:14:01.338835 kubelet[2768]: I0515 15:14:01.338682 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6x5jx" nodeCondition=["DiskPressure"] May 15 15:14:01.397691 kubelet[2768]: I0515 15:14:01.397577 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mkrtv" nodeCondition=["DiskPressure"] May 15 15:14:01.491461 kubelet[2768]: I0515 15:14:01.491295 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ddcct" nodeCondition=["DiskPressure"] May 15 15:14:01.591902 kubelet[2768]: I0515 15:14:01.591713 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-sdc72" nodeCondition=["DiskPressure"] May 15 15:14:01.638766 kubelet[2768]: 
I0515 15:14:01.638682 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6lrw9" nodeCondition=["DiskPressure"] May 15 15:14:01.742522 kubelet[2768]: I0515 15:14:01.742453 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xmxsc" nodeCondition=["DiskPressure"] May 15 15:14:01.941324 kubelet[2768]: I0515 15:14:01.940731 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-skp8v" nodeCondition=["DiskPressure"] May 15 15:14:02.048125 kubelet[2768]: I0515 15:14:02.045957 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6crh7" nodeCondition=["DiskPressure"] May 15 15:14:02.138532 kubelet[2768]: I0515 15:14:02.138472 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9p5dc" nodeCondition=["DiskPressure"] May 15 15:14:02.240438 kubelet[2768]: I0515 15:14:02.240366 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dcw8l" nodeCondition=["DiskPressure"] May 15 15:14:02.338704 kubelet[2768]: I0515 15:14:02.338633 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dws8d" nodeCondition=["DiskPressure"] May 15 15:14:02.540628 kubelet[2768]: I0515 15:14:02.539851 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7kmpm" nodeCondition=["DiskPressure"] May 15 15:14:02.652923 kubelet[2768]: I0515 15:14:02.652161 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-b2zsb" nodeCondition=["DiskPressure"] May 15 15:14:02.706366 kubelet[2768]: I0515 15:14:02.706245 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4cxpv" nodeCondition=["DiskPressure"] May 15 15:14:02.791903 kubelet[2768]: I0515 15:14:02.791277 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-h5wt5" nodeCondition=["DiskPressure"] May 15 15:14:02.890226 kubelet[2768]: I0515 15:14:02.889831 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-cxncn" nodeCondition=["DiskPressure"] May 15 15:14:02.987378 kubelet[2768]: I0515 15:14:02.986602 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dbz6p" nodeCondition=["DiskPressure"] May 15 15:14:03.115077 containerd[1566]: time="2025-05-15T15:14:03.114767669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:14:03.193063 kubelet[2768]: I0515 15:14:03.192665 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tv55w" nodeCondition=["DiskPressure"] May 15 15:14:03.229457 containerd[1566]: time="2025-05-15T15:14:03.229341456Z" level=error msg="Failed to destroy network for sandbox \"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:03.232527 containerd[1566]: time="2025-05-15T15:14:03.232451129Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:03.233659 kubelet[2768]: E0515 15:14:03.233588 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:03.233789 kubelet[2768]: E0515 15:14:03.233677 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:03.233789 kubelet[2768]: E0515 15:14:03.233711 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:03.233789 kubelet[2768]: E0515 15:14:03.233771 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2698a47dbc9ac9c9f240ce5d935c6fab1e11006cc5a88ad66d0b7c0212e41ce9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:14:03.234267 systemd[1]: run-netns-cni\x2dff04a334\x2d71db\x2d6771\x2dc42c\x2d5ec09a07b7e2.mount: Deactivated successfully. 
May 15 15:14:03.287713 kubelet[2768]: I0515 15:14:03.287640 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hpx5x" nodeCondition=["DiskPressure"] May 15 15:14:03.389448 kubelet[2768]: I0515 15:14:03.389292 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vdvct" nodeCondition=["DiskPressure"] May 15 15:14:03.587360 kubelet[2768]: I0515 15:14:03.587304 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-x6d9j" nodeCondition=["DiskPressure"] May 15 15:14:03.689229 kubelet[2768]: I0515 15:14:03.688700 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6x668" nodeCondition=["DiskPressure"] May 15 15:14:03.787677 kubelet[2768]: I0515 15:14:03.787371 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-92c4n" nodeCondition=["DiskPressure"] May 15 15:14:03.986579 kubelet[2768]: I0515 15:14:03.986533 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-pqgdv" nodeCondition=["DiskPressure"] May 15 15:14:04.086722 kubelet[2768]: I0515 15:14:04.086654 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-bts7r" nodeCondition=["DiskPressure"] May 15 15:14:04.115062 kubelet[2768]: E0515 15:14:04.115008 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:04.116618 containerd[1566]: time="2025-05-15T15:14:04.116575661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:14:04.117136 containerd[1566]: time="2025-05-15T15:14:04.117069638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:14:04.193557 kubelet[2768]: I0515 15:14:04.193442 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-2pf7m" nodeCondition=["DiskPressure"] May 15 15:14:04.245079 containerd[1566]: time="2025-05-15T15:14:04.244921705Z" level=error msg="Failed to destroy network for sandbox \"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.250871 systemd[1]: run-netns-cni\x2d8c208e88\x2db169\x2dd9c3\x2df272\x2d48689f46a03b.mount: Deactivated successfully. 
May 15 15:14:04.253313 containerd[1566]: time="2025-05-15T15:14:04.253250588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.254014 kubelet[2768]: E0515 15:14:04.253979 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.254098 kubelet[2768]: E0515 15:14:04.254038 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:04.254098 kubelet[2768]: E0515 15:14:04.254061 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:04.254191 kubelet[2768]: E0515 15:14:04.254103 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9f5244b103893d0b0c44ca048414f90ab1895d813620f35bdec14dceee6485d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:14:04.277479 containerd[1566]: time="2025-05-15T15:14:04.274714080Z" level=error msg="Failed to destroy network for sandbox \"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.278618 systemd[1]: run-netns-cni\x2d95ae1362\x2d1b13\x2dd4c6\x2dd2ec\x2d44a74c7e1c18.mount: Deactivated successfully. 
May 15 15:14:04.279067 containerd[1566]: time="2025-05-15T15:14:04.279018368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.279539 kubelet[2768]: E0515 15:14:04.279497 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:04.281205 kubelet[2768]: E0515 15:14:04.281124 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:04.281205 kubelet[2768]: E0515 15:14:04.281194 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:04.281331 kubelet[2768]: E0515 15:14:04.281281 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb7efcc1b0c2f507fc21833a7f793eac34ab1e78e6402f8f26c22fdb0957cdbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:14:04.294003 kubelet[2768]: I0515 15:14:04.293844 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7zjmg" nodeCondition=["DiskPressure"] May 15 15:14:04.384909 kubelet[2768]: I0515 15:14:04.384854 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rnrgf" nodeCondition=["DiskPressure"] May 15 15:14:04.485052 kubelet[2768]: I0515 15:14:04.484983 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7lxlw" nodeCondition=["DiskPressure"] May 15 
15:14:04.586636 kubelet[2768]: I0515 15:14:04.586505 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lx2mw" nodeCondition=["DiskPressure"] May 15 15:14:04.787024 kubelet[2768]: I0515 15:14:04.786947 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-2sl8h" nodeCondition=["DiskPressure"] May 15 15:14:04.886945 kubelet[2768]: I0515 15:14:04.886821 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rg4kr" nodeCondition=["DiskPressure"] May 15 15:14:04.988080 kubelet[2768]: I0515 15:14:04.988022 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9tnc7" nodeCondition=["DiskPressure"] May 15 15:14:05.188322 kubelet[2768]: I0515 15:14:05.188186 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mrr9g" nodeCondition=["DiskPressure"] May 15 15:14:05.286125 kubelet[2768]: I0515 15:14:05.286059 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-z6h9q" nodeCondition=["DiskPressure"] May 15 15:14:05.396439 kubelet[2768]: I0515 15:14:05.395875 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-znrjz" nodeCondition=["DiskPressure"] May 15 15:14:05.485895 kubelet[2768]: I0515 15:14:05.485833 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-gr5tr" nodeCondition=["DiskPressure"] May 15 15:14:05.601576 kubelet[2768]: I0515 15:14:05.601514 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-t7z7h" nodeCondition=["DiskPressure"] May 15 15:14:05.788432 kubelet[2768]: I0515 15:14:05.788148 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-trvh8" nodeCondition=["DiskPressure"] May 15 15:14:05.887591 kubelet[2768]: I0515 15:14:05.887527 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zl9nc" nodeCondition=["DiskPressure"] May 15 15:14:05.935713 kubelet[2768]: I0515 15:14:05.935615 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-fjkjv" nodeCondition=["DiskPressure"] May 15 15:14:06.036233 kubelet[2768]: I0515 15:14:06.036181 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-75t9t" nodeCondition=["DiskPressure"] May 15 15:14:06.114352 kubelet[2768]: E0515 15:14:06.114229 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:06.115290 containerd[1566]: time="2025-05-15T15:14:06.114969384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:14:06.197239 containerd[1566]: time="2025-05-15T15:14:06.197154296Z" level=error msg="Failed to destroy network for sandbox \"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 
15:14:06.199966 containerd[1566]: time="2025-05-15T15:14:06.199890473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:06.200439 kubelet[2768]: E0515 15:14:06.200405 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:06.200631 kubelet[2768]: E0515 15:14:06.200530 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:06.200524 systemd[1]: run-netns-cni\x2dcbf94965\x2da5ab\x2df759\x2da5f1\x2d6b9354c00bce.mount: Deactivated successfully. May 15 15:14:06.201752 kubelet[2768]: E0515 15:14:06.201385 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:06.201752 kubelet[2768]: E0515 15:14:06.201487 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65aca078219bba6fd9e0c204b132bdddf8b44c96bf8cd0fcb0afe0b13be5a040\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:14:06.240740 kubelet[2768]: I0515 15:14:06.240607 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6299k" nodeCondition=["DiskPressure"] May 15 15:14:06.335602 kubelet[2768]: I0515 15:14:06.335543 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4xw8x" nodeCondition=["DiskPressure"] May 15 15:14:06.437064 kubelet[2768]: I0515 15:14:06.436888 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6vxg9" nodeCondition=["DiskPressure"] 
May 15 15:14:06.537126 kubelet[2768]: I0515 15:14:06.537082 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rfkl2" nodeCondition=["DiskPressure"] May 15 15:14:06.636819 kubelet[2768]: I0515 15:14:06.636764 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-p74b7" nodeCondition=["DiskPressure"] May 15 15:14:06.735974 kubelet[2768]: I0515 15:14:06.735920 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-h4w2f" nodeCondition=["DiskPressure"] May 15 15:14:06.802196 kubelet[2768]: I0515 15:14:06.802128 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:06.802196 kubelet[2768]: I0515 15:14:06.802188 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:14:06.805646 kubelet[2768]: I0515 15:14:06.805612 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:14:06.822465 kubelet[2768]: I0515 15:14:06.822427 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:06.822641 kubelet[2768]: I0515 15:14:06.822564 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:06.822641 kubelet[2768]: E0515 15:14:06.822616 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:06.822641 kubelet[2768]: E0515 15:14:06.822630 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:06.822641 kubelet[2768]: E0515 15:14:06.822641 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822652 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822663 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822679 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822692 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822706 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822720 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:06.822898 kubelet[2768]: E0515 15:14:06.822735 2768 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:06.822898 kubelet[2768]: I0515 15:14:06.822750 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:06.839118 kubelet[2768]: I0515 15:14:06.839065 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tn2hl" nodeCondition=["DiskPressure"] May 15 15:14:06.938084 kubelet[2768]: I0515 15:14:06.938036 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hqdk2" nodeCondition=["DiskPressure"] May 15 15:14:07.039941 kubelet[2768]: I0515 15:14:07.039793 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-22b46" nodeCondition=["DiskPressure"] May 15 15:14:07.135996 kubelet[2768]: I0515 15:14:07.135941 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mjddj" nodeCondition=["DiskPressure"] May 15 15:14:07.236642 kubelet[2768]: I0515 15:14:07.236570 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-l585b" nodeCondition=["DiskPressure"] May 15 15:14:07.437086 kubelet[2768]: I0515 15:14:07.436913 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-czqpb" nodeCondition=["DiskPressure"] May 15 15:14:07.540925 kubelet[2768]: I0515 15:14:07.540857 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9h8kb" nodeCondition=["DiskPressure"] May 15 15:14:07.639316 kubelet[2768]: I0515 15:14:07.639221 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ppxxj" nodeCondition=["DiskPressure"] May 15 15:14:07.736200 kubelet[2768]: I0515 15:14:07.736148 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hwlnx" nodeCondition=["DiskPressure"] May 15 15:14:07.790653 kubelet[2768]: I0515 15:14:07.790573 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vdflb" nodeCondition=["DiskPressure"] May 15 15:14:07.886566 kubelet[2768]: I0515 15:14:07.886516 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ts42f" nodeCondition=["DiskPressure"] May 15 15:14:07.988047 kubelet[2768]: I0515 15:14:07.987739 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5zq4h" nodeCondition=["DiskPressure"] May 15 15:14:08.088150 kubelet[2768]: I0515 15:14:08.088095 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nqhsf" nodeCondition=["DiskPressure"] May 15 15:14:08.188865 kubelet[2768]: I0515 15:14:08.188816 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-69mlr" nodeCondition=["DiskPressure"] May 15 15:14:08.287904 kubelet[2768]: I0515 15:14:08.287063 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-h8l8v" nodeCondition=["DiskPressure"] May 15 15:14:08.390314 kubelet[2768]: I0515 15:14:08.390252 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-brs2s" 
nodeCondition=["DiskPressure"] May 15 15:14:08.495025 kubelet[2768]: I0515 15:14:08.494915 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-kvhf4" nodeCondition=["DiskPressure"] May 15 15:14:08.687599 kubelet[2768]: I0515 15:14:08.686903 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qdnvk" nodeCondition=["DiskPressure"] May 15 15:14:08.788028 kubelet[2768]: I0515 15:14:08.787978 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tkssq" nodeCondition=["DiskPressure"] May 15 15:14:08.887661 kubelet[2768]: I0515 15:14:08.887591 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xtmdd" nodeCondition=["DiskPressure"] May 15 15:14:08.991010 kubelet[2768]: I0515 15:14:08.990907 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zgkxx" nodeCondition=["DiskPressure"] May 15 15:14:09.091084 kubelet[2768]: I0515 15:14:09.091004 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lmgck" nodeCondition=["DiskPressure"] May 15 15:14:09.190536 kubelet[2768]: I0515 15:14:09.190416 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-bn5wq" nodeCondition=["DiskPressure"] May 15 15:14:09.290950 kubelet[2768]: I0515 15:14:09.290154 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9br86" nodeCondition=["DiskPressure"] May 15 15:14:09.490234 kubelet[2768]: I0515 15:14:09.490136 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-n4cm2" nodeCondition=["DiskPressure"] May 15 15:14:09.590284 kubelet[2768]: I0515 15:14:09.589982 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-76c6q" nodeCondition=["DiskPressure"] May 15 15:14:09.638505 kubelet[2768]: I0515 15:14:09.638426 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-67nbz" nodeCondition=["DiskPressure"] May 15 15:14:09.738938 kubelet[2768]: I0515 15:14:09.738883 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-sdxsc" nodeCondition=["DiskPressure"] May 15 15:14:09.838020 kubelet[2768]: I0515 15:14:09.837952 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qj9k5" nodeCondition=["DiskPressure"] May 15 15:14:09.941470 kubelet[2768]: I0515 15:14:09.939706 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-t5dnp" nodeCondition=["DiskPressure"] May 15 15:14:10.039449 kubelet[2768]: I0515 15:14:10.039396 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wrrgv" nodeCondition=["DiskPressure"] May 15 15:14:10.115392 kubelet[2768]: E0515 15:14:10.115273 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:10.116996 containerd[1566]: time="2025-05-15T15:14:10.116851548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:14:10.144714 kubelet[2768]: I0515 
15:14:10.144630 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-76frb" nodeCondition=["DiskPressure"] May 15 15:14:10.238939 kubelet[2768]: I0515 15:14:10.238879 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dtw26" nodeCondition=["DiskPressure"] May 15 15:14:10.343045 kubelet[2768]: I0515 15:14:10.342997 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-592p6" nodeCondition=["DiskPressure"] May 15 15:14:10.437013 kubelet[2768]: I0515 15:14:10.436965 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-chjcr" nodeCondition=["DiskPressure"] May 15 15:14:10.539990 kubelet[2768]: I0515 15:14:10.539847 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-bmn92" nodeCondition=["DiskPressure"] May 15 15:14:10.738469 kubelet[2768]: I0515 15:14:10.738383 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-p26rr" nodeCondition=["DiskPressure"] May 15 15:14:10.839644 kubelet[2768]: I0515 15:14:10.839493 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-k9rfm" nodeCondition=["DiskPressure"] May 15 15:14:10.949090 kubelet[2768]: I0515 15:14:10.948774 2768 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jj8gw" nodeCondition=["DiskPressure"] May 15 15:14:11.403849 systemd[1]: Started sshd@8-165.232.158.142:22-139.178.68.195:49504.service - OpenSSH per-connection server daemon (139.178.68.195:49504). May 15 15:14:11.515655 sshd[3853]: Accepted publickey for core from 139.178.68.195 port 49504 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:11.518213 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:11.524468 systemd-logind[1490]: New session 8 of user core. May 15 15:14:11.532473 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 15:14:11.750383 sshd[3855]: Connection closed by 139.178.68.195 port 49504 May 15 15:14:11.751226 sshd-session[3853]: pam_unix(sshd:session): session closed for user core May 15 15:14:11.757987 systemd[1]: sshd@8-165.232.158.142:22-139.178.68.195:49504.service: Deactivated successfully. May 15 15:14:11.762686 systemd[1]: session-8.scope: Deactivated successfully. May 15 15:14:11.765003 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. May 15 15:14:11.768688 systemd-logind[1490]: Removed session 8. May 15 15:14:13.379162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270886099.mount: Deactivated successfully. 
May 15 15:14:13.381847 containerd[1566]: time="2025-05-15T15:14:13.381684524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3270886099: mkdir /var/lib/containerd/tmpmounts/containerd-mount3270886099/usr/lib/.build-id/5d: no space left on device" May 15 15:14:13.382267 containerd[1566]: time="2025-05-15T15:14:13.381824769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:14:13.383874 kubelet[2768]: E0515 15:14:13.383806 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3270886099: mkdir /var/lib/containerd/tmpmounts/containerd-mount3270886099/usr/lib/.build-id/5d: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:14:13.384839 kubelet[2768]: E0515 15:14:13.383897 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3270886099: mkdir /var/lib/containerd/tmpmounts/containerd-mount3270886099/usr/lib/.build-id/5d: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:14:13.390221 kubelet[2768]: E0515 15:14:13.389630 2768 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-mxm84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-56p29_calico-system(ea3f9278-e4ee-4dca-80e1-48db54fe37e5): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3270886099: mkdir /var/lib/containerd/tmpmounts/containerd-mount3270886099/usr/lib/.build-id/5d: no space left on device May 15 15:14:13.394221 kubelet[2768]: E0515 15:14:13.394043 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3270886099: mkdir /var/lib/containerd/tmpmounts/containerd-mount3270886099/usr/lib/.build-id/5d: no space left on device\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:14:15.119506 containerd[1566]: time="2025-05-15T15:14:15.119296038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:14:15.197198 containerd[1566]: time="2025-05-15T15:14:15.195043704Z" level=error msg="Failed to destroy network for sandbox \"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:15.197751 systemd[1]: run-netns-cni\x2dff203149\x2d0364\x2d8aa0\x2d30f7\x2d83bd4f700b9c.mount: Deactivated successfully. 
May 15 15:14:15.199734 containerd[1566]: time="2025-05-15T15:14:15.199686117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:15.200448 kubelet[2768]: E0515 15:14:15.200329 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:15.200448 kubelet[2768]: E0515 15:14:15.200422 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:15.201876 kubelet[2768]: E0515 15:14:15.200866 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:15.201876 kubelet[2768]: E0515 15:14:15.200959 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76e9884fb33f55a48793851562490de6b51347ab68977c459c9790628da3c109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:14:16.114750 kubelet[2768]: E0515 15:14:16.114574 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:16.115299 containerd[1566]: time="2025-05-15T15:14:16.115263009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:14:16.194067 containerd[1566]: time="2025-05-15T15:14:16.193965119Z" level=error msg="Failed to destroy network for sandbox 
\"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:16.197495 systemd[1]: run-netns-cni\x2d249b2333\x2d97f6\x2d7487\x2d80b0\x2dc18e454e65eb.mount: Deactivated successfully. May 15 15:14:16.198580 containerd[1566]: time="2025-05-15T15:14:16.197859135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:16.199206 kubelet[2768]: E0515 15:14:16.199091 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:16.199361 kubelet[2768]: E0515 15:14:16.199312 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:16.199467 kubelet[2768]: E0515 15:14:16.199427 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:16.199602 kubelet[2768]: E0515 15:14:16.199528 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"271640f1bd3545fe8d899e82436645a2ebbc198d7f2ffa7384e266e2c9652084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:14:16.773281 systemd[1]: Started sshd@9-165.232.158.142:22-139.178.68.195:49192.service - OpenSSH per-connection server daemon (139.178.68.195:49192). 
May 15 15:14:16.842394 kubelet[2768]: I0515 15:14:16.842330 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:16.842394 kubelet[2768]: I0515 15:14:16.842384 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:14:16.846905 kubelet[2768]: I0515 15:14:16.846871 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:14:16.863735 kubelet[2768]: I0515 15:14:16.863635 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:16.864453 kubelet[2768]: I0515 15:14:16.864086 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:16.864453 kubelet[2768]: E0515 15:14:16.864395 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:16.864453 kubelet[2768]: E0515 15:14:16.864410 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:16.864453 kubelet[2768]: E0515 15:14:16.864420 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:16.864453 kubelet[2768]: E0515 15:14:16.864428 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:16.864453 kubelet[2768]: E0515 15:14:16.864436 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:16.865075 kubelet[2768]: E0515 15:14:16.864719 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:16.865075 kubelet[2768]: E0515 15:14:16.864735 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:16.865075 kubelet[2768]: E0515 15:14:16.864745 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:16.865075 kubelet[2768]: E0515 15:14:16.864754 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:16.865075 kubelet[2768]: E0515 15:14:16.864766 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:16.865075 kubelet[2768]: I0515 15:14:16.864786 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:16.871012 sshd[3930]: Accepted publickey for core from 139.178.68.195 port 49192 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:16.873409 sshd-session[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:16.879093 systemd-logind[1490]: New session 9 of user 
core. May 15 15:14:16.888468 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 15:14:17.065720 sshd[3932]: Connection closed by 139.178.68.195 port 49192 May 15 15:14:17.066357 sshd-session[3930]: pam_unix(sshd:session): session closed for user core May 15 15:14:17.071391 systemd[1]: sshd@9-165.232.158.142:22-139.178.68.195:49192.service: Deactivated successfully. May 15 15:14:17.073935 systemd[1]: session-9.scope: Deactivated successfully. May 15 15:14:17.075246 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. May 15 15:14:17.077541 systemd-logind[1490]: Removed session 9. May 15 15:14:18.115311 containerd[1566]: time="2025-05-15T15:14:18.115224279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:14:18.189087 containerd[1566]: time="2025-05-15T15:14:18.188981444Z" level=error msg="Failed to destroy network for sandbox \"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:18.192805 containerd[1566]: time="2025-05-15T15:14:18.192670927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:18.194338 kubelet[2768]: E0515 15:14:18.193298 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:18.194338 kubelet[2768]: E0515 15:14:18.193366 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:18.194338 kubelet[2768]: E0515 15:14:18.193388 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:18.193838 systemd[1]: run-netns-cni\x2dfc1e5113\x2dfced\x2d497a\x2d982a\x2ddbcb607459c5.mount: Deactivated successfully. 
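Each failed sandbox also leaves a network namespace mount behind, which systemd then tears down; that is where unit names like run-netns-cni\x2dfc1e5113\x2d... come from. systemd derives a mount unit name from the mount path by stripping the leading slash, turning the remaining path separators into dashes, and escaping characters such as a literal "-" as \x2d. The Go sketch below illustrates the mapping for the paths seen here; it covers only these two characters, not the full systemd-escape rules.

    // Simplified illustration of how /run/netns/cni-<id> maps to the mount unit
    // names above. Assumption: only "-" and "/" are handled, unlike systemd-escape.
    package main

    import (
        "fmt"
        "strings"
    )

    func mountUnitName(path string) string {
        escaped := strings.ReplaceAll(path, "-", `\x2d`) // escape literal dashes first
        escaped = strings.TrimPrefix(escaped, "/")
        escaped = strings.ReplaceAll(escaped, "/", "-") // path separators become dashes
        return escaped + ".mount"
    }

    func main() {
        fmt.Println(mountUnitName("/run/netns/cni-249b2333-97f6-7487-80b0-c18e454e65eb"))
        // prints: run-netns-cni\x2d249b2333\x2d97f6\x2d7487\x2d80b0\x2dc18e454e65eb.mount
    }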
May 15 15:14:18.196367 kubelet[2768]: E0515 15:14:18.194551 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c09a981803075f8828623c216de6f47c176f79e965a845146312cabe3b34af6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:14:21.116352 kubelet[2768]: E0515 15:14:21.116204 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:21.118031 containerd[1566]: time="2025-05-15T15:14:21.117827509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:14:21.195459 containerd[1566]: time="2025-05-15T15:14:21.195383906Z" level=error msg="Failed to destroy network for sandbox \"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:21.197758 containerd[1566]: time="2025-05-15T15:14:21.197692294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:21.200301 kubelet[2768]: E0515 15:14:21.198243 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:21.200301 kubelet[2768]: E0515 15:14:21.198302 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:21.200301 kubelet[2768]: E0515 15:14:21.198321 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:21.200301 kubelet[2768]: E0515 15:14:21.198864 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04668286f167133f2298f43c87aace39ed2495196a9900e6a2c6a398092a36c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:14:21.198605 systemd[1]: run-netns-cni\x2dd5e41ba9\x2ded59\x2d2d8d\x2de786\x2d04474d0f48b8.mount: Deactivated successfully. May 15 15:14:22.091726 systemd[1]: Started sshd@10-165.232.158.142:22-139.178.68.195:49194.service - OpenSSH per-connection server daemon (139.178.68.195:49194). May 15 15:14:22.147659 sshd[4009]: Accepted publickey for core from 139.178.68.195 port 49194 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:22.150128 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:22.156518 systemd-logind[1490]: New session 10 of user core. May 15 15:14:22.165736 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 15:14:22.313800 sshd[4011]: Connection closed by 139.178.68.195 port 49194 May 15 15:14:22.314504 sshd-session[4009]: pam_unix(sshd:session): session closed for user core May 15 15:14:22.319251 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. May 15 15:14:22.319915 systemd[1]: sshd@10-165.232.158.142:22-139.178.68.195:49194.service: Deactivated successfully. May 15 15:14:22.322571 systemd[1]: session-10.scope: Deactivated successfully. May 15 15:14:22.324753 systemd-logind[1490]: Removed session 10. 
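The recurring dns.go:153 warning is unrelated to Calico: it means the droplet's /etc/resolv.conf lists more nameservers than the kubelet will copy into pod resolv.conf files, so only the first entries are applied (here 67.207.67.2, 67.207.67.3 and a duplicate 67.207.67.2) and the rest are omitted. A minimal sketch of that truncation follows, assuming the usual limit of three nameservers; the fourth entry in the example is made up.

    // Minimal sketch of the truncation behind the "Nameserver limits exceeded" warning.
    // Assumption: a limit of 3, matching the commonly documented kubelet default.
    package main

    import "fmt"

    const maxNameservers = 3

    func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        // The fourth entry is hypothetical; the log only shows the three that survived.
        host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "192.0.2.53"}
        applied, omitted := applyNameserverLimit(host)
        fmt.Println("applied nameserver line:", applied, "some omitted:", omitted)
    }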
May 15 15:14:25.115246 kubelet[2768]: E0515 15:14:25.114741 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:25.117159 kubelet[2768]: E0515 15:14:25.117095 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:14:26.114994 containerd[1566]: time="2025-05-15T15:14:26.114898045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:14:26.178743 containerd[1566]: time="2025-05-15T15:14:26.178681347Z" level=error msg="Failed to destroy network for sandbox \"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:26.180981 systemd[1]: run-netns-cni\x2dbb8e3409\x2d09ec\x2d7fe4\x2df065\x2d679aef8ae65e.mount: Deactivated successfully. May 15 15:14:26.182841 containerd[1566]: time="2025-05-15T15:14:26.182750817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:26.183668 kubelet[2768]: E0515 15:14:26.183624 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:26.184013 kubelet[2768]: E0515 15:14:26.183696 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:26.184013 kubelet[2768]: E0515 15:14:26.183729 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:26.184013 kubelet[2768]: E0515 15:14:26.183779 2768 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1c105eb42b71789ef97f8faff848079ecdc17fd5812666fc066c8f2e4781ab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:14:26.881276 kubelet[2768]: I0515 15:14:26.881221 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:26.881276 kubelet[2768]: I0515 15:14:26.881269 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:14:26.885290 kubelet[2768]: I0515 15:14:26.885258 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:14:26.901144 kubelet[2768]: I0515 15:14:26.901101 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:26.901346 kubelet[2768]: I0515 15:14:26.901238 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:26.901346 kubelet[2768]: E0515 15:14:26.901296 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:26.901346 kubelet[2768]: E0515 15:14:26.901312 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:26.901346 kubelet[2768]: E0515 15:14:26.901323 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:26.901346 kubelet[2768]: E0515 15:14:26.901337 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:26.901346 kubelet[2768]: E0515 15:14:26.901348 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:26.901609 kubelet[2768]: E0515 15:14:26.901365 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:26.901609 kubelet[2768]: E0515 15:14:26.901377 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:26.901609 kubelet[2768]: E0515 15:14:26.901393 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:26.901609 kubelet[2768]: E0515 15:14:26.901405 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:26.901609 kubelet[2768]: E0515 15:14:26.901418 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:26.901609 kubelet[2768]: I0515 15:14:26.901431 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:27.341455 systemd[1]: Started sshd@11-165.232.158.142:22-139.178.68.195:44670.service - OpenSSH per-connection server daemon (139.178.68.195:44670). May 15 15:14:27.409378 sshd[4053]: Accepted publickey for core from 139.178.68.195 port 44670 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:27.411117 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:27.417817 systemd-logind[1490]: New session 11 of user core. May 15 15:14:27.422441 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 15:14:27.575907 sshd[4055]: Connection closed by 139.178.68.195 port 44670 May 15 15:14:27.576472 sshd-session[4053]: pam_unix(sshd:session): session closed for user core May 15 15:14:27.590100 systemd[1]: sshd@11-165.232.158.142:22-139.178.68.195:44670.service: Deactivated successfully. May 15 15:14:27.592723 systemd[1]: session-11.scope: Deactivated successfully. May 15 15:14:27.594038 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. May 15 15:14:27.598341 systemd[1]: Started sshd@12-165.232.158.142:22-139.178.68.195:44674.service - OpenSSH per-connection server daemon (139.178.68.195:44674). May 15 15:14:27.599928 systemd-logind[1490]: Removed session 11. May 15 15:14:27.664965 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 44674 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:27.667051 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:27.673276 systemd-logind[1490]: New session 12 of user core. May 15 15:14:27.680451 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 15:14:27.861663 sshd[4070]: Connection closed by 139.178.68.195 port 44674 May 15 15:14:27.861001 sshd-session[4068]: pam_unix(sshd:session): session closed for user core May 15 15:14:27.875583 systemd[1]: sshd@12-165.232.158.142:22-139.178.68.195:44674.service: Deactivated successfully. May 15 15:14:27.878675 systemd[1]: session-12.scope: Deactivated successfully. May 15 15:14:27.883409 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. May 15 15:14:27.885524 systemd[1]: Started sshd@13-165.232.158.142:22-139.178.68.195:44690.service - OpenSSH per-connection server daemon (139.178.68.195:44690). May 15 15:14:27.889922 systemd-logind[1490]: Removed session 12. May 15 15:14:27.945917 sshd[4080]: Accepted publickey for core from 139.178.68.195 port 44690 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:27.948077 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:27.954437 systemd-logind[1490]: New session 13 of user core. May 15 15:14:27.960434 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 15 15:14:28.105822 sshd[4082]: Connection closed by 139.178.68.195 port 44690 May 15 15:14:28.104964 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 15 15:14:28.109967 systemd[1]: sshd@13-165.232.158.142:22-139.178.68.195:44690.service: Deactivated successfully. May 15 15:14:28.112701 systemd[1]: session-13.scope: Deactivated successfully. May 15 15:14:28.113833 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. May 15 15:14:28.116055 kubelet[2768]: E0515 15:14:28.115360 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:28.115699 systemd-logind[1490]: Removed session 13. May 15 15:14:28.117091 containerd[1566]: time="2025-05-15T15:14:28.117059474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:14:28.185405 containerd[1566]: time="2025-05-15T15:14:28.185323046Z" level=error msg="Failed to destroy network for sandbox \"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:28.188352 containerd[1566]: time="2025-05-15T15:14:28.188239726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:28.188402 systemd[1]: run-netns-cni\x2dcc752c3d\x2dcbae\x2dcebf\x2db38c\x2d17305303942e.mount: Deactivated successfully. 
May 15 15:14:28.188781 kubelet[2768]: E0515 15:14:28.188744 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:28.188967 kubelet[2768]: E0515 15:14:28.188919 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:28.189332 kubelet[2768]: E0515 15:14:28.189030 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:28.189332 kubelet[2768]: E0515 15:14:28.189089 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13e06c3af35b3530f41615181ad9587a0b4429ec464303e2129199358d210ba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:14:29.115962 containerd[1566]: time="2025-05-15T15:14:29.115675501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:14:29.178639 containerd[1566]: time="2025-05-15T15:14:29.178582530Z" level=error msg="Failed to destroy network for sandbox \"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:29.182314 containerd[1566]: time="2025-05-15T15:14:29.182213081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:29.182534 systemd[1]: run-netns-cni\x2d7c4b9e21\x2d7b28\x2d062c\x2d969c\x2d635f37838242.mount: 
Deactivated successfully. May 15 15:14:29.183731 kubelet[2768]: E0515 15:14:29.183687 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:29.184638 kubelet[2768]: E0515 15:14:29.183760 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:29.184638 kubelet[2768]: E0515 15:14:29.183783 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:29.184638 kubelet[2768]: E0515 15:14:29.183829 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e2e90bc89a156f392e0f5ed24865f8c7bebadbb3e7d2254f57433dd90dee234\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:14:31.937207 systemd[1]: Started sshd@14-165.232.158.142:22-80.94.95.15:60237.service - OpenSSH per-connection server daemon (80.94.95.15:60237). May 15 15:14:32.115331 kubelet[2768]: E0515 15:14:32.115289 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:33.123999 systemd[1]: Started sshd@15-165.232.158.142:22-139.178.68.195:44700.service - OpenSSH per-connection server daemon (139.178.68.195:44700). May 15 15:14:33.184362 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 44700 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:33.186298 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:33.192817 systemd-logind[1490]: New session 14 of user core. May 15 15:14:33.201434 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 15:14:33.337701 sshd[4159]: Connection closed by 139.178.68.195 port 44700 May 15 15:14:33.338689 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 15 15:14:33.343789 systemd[1]: sshd@15-165.232.158.142:22-139.178.68.195:44700.service: Deactivated successfully. 
May 15 15:14:33.347375 systemd[1]: session-14.scope: Deactivated successfully. May 15 15:14:33.350996 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. May 15 15:14:33.353134 systemd-logind[1490]: Removed session 14. May 15 15:14:34.046583 sshd[4154]: Received disconnect from 80.94.95.15 port 60237:11: Bye [preauth] May 15 15:14:34.046583 sshd[4154]: Disconnected from authenticating user root 80.94.95.15 port 60237 [preauth] May 15 15:14:34.048472 systemd[1]: sshd@14-165.232.158.142:22-80.94.95.15:60237.service: Deactivated successfully. May 15 15:14:36.918886 kubelet[2768]: I0515 15:14:36.918836 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:36.920444 kubelet[2768]: I0515 15:14:36.919322 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:36.920845 kubelet[2768]: I0515 15:14:36.920679 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920731 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920743 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920750 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920757 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920765 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920778 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920789 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920799 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920807 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:36.920845 kubelet[2768]: E0515 15:14:36.920815 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:36.920845 kubelet[2768]: I0515 15:14:36.920825 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:37.116061 
kubelet[2768]: E0515 15:14:37.115235 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:37.117426 containerd[1566]: time="2025-05-15T15:14:37.117387837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:14:37.183502 containerd[1566]: time="2025-05-15T15:14:37.183370293Z" level=error msg="Failed to destroy network for sandbox \"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:37.188974 containerd[1566]: time="2025-05-15T15:14:37.188890575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:37.189391 kubelet[2768]: E0515 15:14:37.189273 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:37.189391 kubelet[2768]: E0515 15:14:37.189348 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:37.189391 kubelet[2768]: E0515 15:14:37.189374 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:37.189589 kubelet[2768]: E0515 15:14:37.189433 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"325a945e00f66d57c897b62252caeadb1aebcc9f28dc8ce5330b780694020521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:14:37.189921 systemd[1]: run-netns-cni\x2dc4b3fa6a\x2dbd45\x2dbced\x2d1b3a\x2d464967583666.mount: Deactivated successfully. May 15 15:14:38.115398 containerd[1566]: time="2025-05-15T15:14:38.115348628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:14:38.186195 containerd[1566]: time="2025-05-15T15:14:38.186130440Z" level=error msg="Failed to destroy network for sandbox \"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:38.190028 systemd[1]: run-netns-cni\x2daa46ce72\x2d323b\x2dd6d2\x2dc14a\x2deb20f44c5d60.mount: Deactivated successfully. May 15 15:14:38.190927 containerd[1566]: time="2025-05-15T15:14:38.189890840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:38.192758 kubelet[2768]: E0515 15:14:38.191641 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:38.192758 kubelet[2768]: E0515 15:14:38.191705 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:38.192758 kubelet[2768]: E0515 15:14:38.191725 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:38.192758 kubelet[2768]: E0515 15:14:38.191766 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"8d242b7e79085313dfb6de8a08e0b0227c1d406969a2fb165f827ea02f337c6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:14:38.356798 systemd[1]: Started sshd@16-165.232.158.142:22-139.178.68.195:57622.service - OpenSSH per-connection server daemon (139.178.68.195:57622). May 15 15:14:38.410569 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 57622 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:38.412706 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:38.420219 systemd-logind[1490]: New session 15 of user core. May 15 15:14:38.428587 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 15:14:38.564721 sshd[4237]: Connection closed by 139.178.68.195 port 57622 May 15 15:14:38.565383 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 15 15:14:38.570523 systemd[1]: sshd@16-165.232.158.142:22-139.178.68.195:57622.service: Deactivated successfully. May 15 15:14:38.573081 systemd[1]: session-15.scope: Deactivated successfully. May 15 15:14:38.574121 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. May 15 15:14:38.576275 systemd-logind[1490]: Removed session 15. May 15 15:14:39.115193 kubelet[2768]: E0515 15:14:39.114930 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:39.117592 containerd[1566]: time="2025-05-15T15:14:39.117337454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:14:39.117730 kubelet[2768]: E0515 15:14:39.117423 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:41.119522 kubelet[2768]: E0515 15:14:41.117966 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:41.121747 containerd[1566]: time="2025-05-15T15:14:41.121349142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:14:41.253290 containerd[1566]: time="2025-05-15T15:14:41.253224701Z" level=error msg="Failed to destroy network for sandbox \"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:41.256617 containerd[1566]: time="2025-05-15T15:14:41.256553978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:41.257451 kubelet[2768]: E0515 15:14:41.257321 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:41.257451 kubelet[2768]: E0515 15:14:41.257391 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:41.257451 kubelet[2768]: E0515 15:14:41.257421 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:41.258367 kubelet[2768]: E0515 15:14:41.257500 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b7dfd863ee4f0db4660d7e96dc4e954582b917b46d37b2adf9ad1adfd7d0a3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:14:41.257708 systemd[1]: run-netns-cni\x2dc5ff400f\x2d1b61\x2d1b61\x2d942d\x2d7901e38286b0.mount: Deactivated successfully. May 15 15:14:42.840743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670316416.mount: Deactivated successfully. 
May 15 15:14:42.842709 containerd[1566]: time="2025-05-15T15:14:42.842527478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount670316416: write /var/lib/containerd/tmpmounts/containerd-mount670316416/usr/bin/mountns: no space left on device" May 15 15:14:42.842709 containerd[1566]: time="2025-05-15T15:14:42.842612581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:14:42.843197 kubelet[2768]: E0515 15:14:42.842967 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount670316416: write /var/lib/containerd/tmpmounts/containerd-mount670316416/usr/bin/mountns: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:14:42.843197 kubelet[2768]: E0515 15:14:42.843017 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount670316416: write /var/lib/containerd/tmpmounts/containerd-mount670316416/usr/bin/mountns: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:14:42.844298 kubelet[2768]: E0515 15:14:42.843248 2768 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-mxm84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-56p29_calico-system(ea3f9278-e4ee-4dca-80e1-48db54fe37e5): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount670316416: write /var/lib/containerd/tmpmounts/containerd-mount670316416/usr/bin/mountns: no space left on device May 15 15:14:42.844474 kubelet[2768]: E0515 15:14:42.843289 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount670316416: write /var/lib/containerd/tmpmounts/containerd-mount670316416/usr/bin/mountns: no space left on device\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:14:43.117899 containerd[1566]: time="2025-05-15T15:14:43.116601741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:14:43.218969 containerd[1566]: time="2025-05-15T15:14:43.216398393Z" level=error msg="Failed to destroy network for sandbox \"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:43.222664 systemd[1]: run-netns-cni\x2de7e09b44\x2d6cc9\x2ddd1f\x2d3f33\x2d595c0ce67418.mount: Deactivated successfully. 
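The ErrImagePull above points at the underlying problem: extracting a calico/node layer fails with "no space left on device" under /var/lib/containerd/tmpmounts. That is the same ephemeral-storage exhaustion the eviction manager keeps detecting but cannot relieve, and until the image can be unpacked the nodename file the CNI plugin needs is never written. A small Linux-only diagnostic sketch follows; the path comes from the log, and the free-space check uses the standard library statfs call.

    // Small Linux-only diagnostic: report free space on the filesystem that holds
    // /var/lib/containerd (where the failing tmpmount lives).
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        path := "/var/lib/containerd"
        var st syscall.Statfs_t
        if err := syscall.Statfs(path, &st); err != nil {
            fmt.Println("statfs:", err)
            return
        }
        bs := uint64(st.Bsize)
        fmt.Printf("%s: %d MiB free of %d MiB\n", path,
            st.Bavail*bs/(1024*1024), st.Blocks*bs/(1024*1024))
    }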
May 15 15:14:43.238795 containerd[1566]: time="2025-05-15T15:14:43.238621253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:43.239463 kubelet[2768]: E0515 15:14:43.239002 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:43.239463 kubelet[2768]: E0515 15:14:43.239072 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:43.239463 kubelet[2768]: E0515 15:14:43.239094 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:43.239463 kubelet[2768]: E0515 15:14:43.239150 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9467e869344ea43ce2a9a421385e5e89d6ee1db6d9a398c92005fa0c544ca1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:14:43.586568 systemd[1]: Started sshd@17-165.232.158.142:22-139.178.68.195:47718.service - OpenSSH per-connection server daemon (139.178.68.195:47718). May 15 15:14:43.663677 sshd[4318]: Accepted publickey for core from 139.178.68.195 port 47718 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:43.665091 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:43.673261 systemd-logind[1490]: New session 16 of user core. May 15 15:14:43.678494 systemd[1]: Started session-16.scope - Session 16 of User core. 
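Between attempts the kubelet does not retry the pull immediately; it reports ImagePullBackOff (as at 15:14:25) and waits, with the delay growing roughly exponentially up to a cap. The toy illustration below assumes a 10-second start and a 5-minute cap, which are common kubelet defaults rather than values read from this node's configuration.

    // Toy illustration of image pull back-off growth; the 10s start and 5m cap
    // are assumptions, not values taken from this node.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d: back off %s before pulling again\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }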
May 15 15:14:43.840652 sshd[4320]: Connection closed by 139.178.68.195 port 47718 May 15 15:14:43.841514 sshd-session[4318]: pam_unix(sshd:session): session closed for user core May 15 15:14:43.847017 systemd[1]: sshd@17-165.232.158.142:22-139.178.68.195:47718.service: Deactivated successfully. May 15 15:14:43.849591 systemd[1]: session-16.scope: Deactivated successfully. May 15 15:14:43.852417 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. May 15 15:14:43.854588 systemd-logind[1490]: Removed session 16. May 15 15:14:45.116116 kubelet[2768]: E0515 15:14:45.115705 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:46.941524 kubelet[2768]: I0515 15:14:46.941312 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:46.941524 kubelet[2768]: I0515 15:14:46.941362 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:14:46.944046 kubelet[2768]: I0515 15:14:46.944009 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:14:46.956765 kubelet[2768]: I0515 15:14:46.956730 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:46.956931 kubelet[2768]: I0515 15:14:46.956845 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:46.956931 kubelet[2768]: E0515 15:14:46.956894 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:46.956931 kubelet[2768]: E0515 15:14:46.956906 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:46.956931 kubelet[2768]: E0515 15:14:46.956914 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:46.956931 kubelet[2768]: E0515 15:14:46.956924 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:46.956931 kubelet[2768]: E0515 15:14:46.956930 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:46.957191 kubelet[2768]: E0515 15:14:46.956943 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:46.957191 kubelet[2768]: E0515 15:14:46.956951 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:46.957191 kubelet[2768]: E0515 15:14:46.956963 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:46.957191 kubelet[2768]: E0515 15:14:46.956977 2768 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:46.957191 kubelet[2768]: E0515 15:14:46.956989 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:46.957191 kubelet[2768]: I0515 15:14:46.957000 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:48.115234 kubelet[2768]: E0515 15:14:48.114981 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:48.116532 containerd[1566]: time="2025-05-15T15:14:48.116456950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:14:48.196475 containerd[1566]: time="2025-05-15T15:14:48.196350340Z" level=error msg="Failed to destroy network for sandbox \"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:48.200209 containerd[1566]: time="2025-05-15T15:14:48.198254388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:48.200395 kubelet[2768]: E0515 15:14:48.198556 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:48.200395 kubelet[2768]: E0515 15:14:48.198611 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:48.200395 kubelet[2768]: E0515 15:14:48.198639 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:48.200395 kubelet[2768]: E0515 15:14:48.198698 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29ea2aa73cd9ba225bde72d494d3df9ba2fd7edc1d5d32e7d4274a1f05e9f860\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:14:48.201264 systemd[1]: run-netns-cni\x2d916172c3\x2dce76\x2d9e00\x2d8d66\x2dfda76805e304.mount: Deactivated successfully. May 15 15:14:48.857644 systemd[1]: Started sshd@18-165.232.158.142:22-139.178.68.195:47728.service - OpenSSH per-connection server daemon (139.178.68.195:47728). May 15 15:14:48.922067 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 47728 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:48.923842 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:48.930210 systemd-logind[1490]: New session 17 of user core. May 15 15:14:48.936444 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 15:14:49.077427 sshd[4366]: Connection closed by 139.178.68.195 port 47728 May 15 15:14:49.079462 sshd-session[4364]: pam_unix(sshd:session): session closed for user core May 15 15:14:49.084728 systemd[1]: sshd@18-165.232.158.142:22-139.178.68.195:47728.service: Deactivated successfully. May 15 15:14:49.087079 systemd[1]: session-17.scope: Deactivated successfully. May 15 15:14:49.088318 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. May 15 15:14:49.090416 systemd-logind[1490]: Removed session 17. May 15 15:14:50.115189 containerd[1566]: time="2025-05-15T15:14:50.115123116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:14:50.180472 containerd[1566]: time="2025-05-15T15:14:50.180406062Z" level=error msg="Failed to destroy network for sandbox \"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:50.182819 systemd[1]: run-netns-cni\x2dd4e3f835\x2d1b26\x2d3fcd\x2d28c7\x2d69c5e9844e2e.mount: Deactivated successfully. 
May 15 15:14:50.183800 containerd[1566]: time="2025-05-15T15:14:50.183657906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:50.185244 kubelet[2768]: E0515 15:14:50.185084 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:50.185244 kubelet[2768]: E0515 15:14:50.185144 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:50.185244 kubelet[2768]: E0515 15:14:50.185168 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:50.185244 kubelet[2768]: E0515 15:14:50.185231 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e262a0180292292bd3453afc4924a22eef52988e09250767c81aab9d2a02cc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:14:52.499443 systemd[1]: Started sshd@19-165.232.158.142:22-218.92.0.166:50832.service - OpenSSH per-connection server daemon (218.92.0.166:50832). May 15 15:14:53.712690 sshd-session[4408]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:14:54.092872 systemd[1]: Started sshd@20-165.232.158.142:22-139.178.68.195:51282.service - OpenSSH per-connection server daemon (139.178.68.195:51282). 
May 15 15:14:54.150941 sshd[4410]: Accepted publickey for core from 139.178.68.195 port 51282 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:54.152796 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:54.159510 systemd-logind[1490]: New session 18 of user core. May 15 15:14:54.167448 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 15:14:54.298895 sshd[4412]: Connection closed by 139.178.68.195 port 51282 May 15 15:14:54.300886 sshd-session[4410]: pam_unix(sshd:session): session closed for user core May 15 15:14:54.305909 systemd[1]: sshd@20-165.232.158.142:22-139.178.68.195:51282.service: Deactivated successfully. May 15 15:14:54.308562 systemd[1]: session-18.scope: Deactivated successfully. May 15 15:14:54.309897 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. May 15 15:14:54.311579 systemd-logind[1490]: Removed session 18. May 15 15:14:55.115908 kubelet[2768]: E0515 15:14:55.114897 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:55.117608 kubelet[2768]: E0515 15:14:55.117423 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:14:55.547166 sshd[4406]: PAM: Permission denied for root from 218.92.0.166 May 15 15:14:55.874926 sshd-session[4423]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:14:56.114359 kubelet[2768]: E0515 15:14:56.114262 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:14:56.116311 containerd[1566]: time="2025-05-15T15:14:56.115621888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:14:56.209774 containerd[1566]: time="2025-05-15T15:14:56.206874470Z" level=error msg="Failed to destroy network for sandbox \"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:56.209774 containerd[1566]: time="2025-05-15T15:14:56.208685889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:56.209969 kubelet[2768]: E0515 15:14:56.209321 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:56.209969 kubelet[2768]: E0515 15:14:56.209392 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:56.209969 kubelet[2768]: E0515 15:14:56.209419 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:56.209969 kubelet[2768]: E0515 15:14:56.209474 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74ad144c20c35df11318f0006c94547ebb4df58e9705ca017b0a0d570917c760\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:14:56.210876 systemd[1]: run-netns-cni\x2d305f2c6e\x2d1022\x2d8530\x2d7a54\x2d1558ea54e94f.mount: Deactivated successfully. 
May 15 15:14:56.984508 kubelet[2768]: I0515 15:14:56.984454 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:14:56.984508 kubelet[2768]: I0515 15:14:56.984500 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:14:56.987792 kubelet[2768]: I0515 15:14:56.987759 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:14:57.001215 kubelet[2768]: I0515 15:14:57.001146 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:14:57.001695 kubelet[2768]: I0515 15:14:57.001480 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","calico-system/csi-node-driver-ssx6b","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001542 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001556 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001565 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001573 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001580 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001592 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001602 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001611 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001620 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:14:57.001695 kubelet[2768]: E0515 15:14:57.001629 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:14:57.001695 kubelet[2768]: I0515 15:14:57.001639 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:14:57.117339 containerd[1566]: time="2025-05-15T15:14:57.116889561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:14:57.212598 containerd[1566]: time="2025-05-15T15:14:57.212541306Z" level=error msg="Failed to 
destroy network for sandbox \"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:57.215238 containerd[1566]: time="2025-05-15T15:14:57.215158437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:57.217678 kubelet[2768]: E0515 15:14:57.216381 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:14:57.217678 kubelet[2768]: E0515 15:14:57.216479 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:57.217678 kubelet[2768]: E0515 15:14:57.216508 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:14:57.217678 kubelet[2768]: E0515 15:14:57.216566 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1da88e219e4aa66d5daba8e179ea4b38a96674bb75c890e932f80ac09381faf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:14:57.219290 systemd[1]: run-netns-cni\x2d75326e0a\x2d855e\x2d4e00\x2d8a00\x2dabe7209e9542.mount: Deactivated successfully. 
May 15 15:14:57.650717 sshd[4406]: PAM: Permission denied for root from 218.92.0.166 May 15 15:14:57.977633 sshd-session[4490]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:14:59.316166 systemd[1]: Started sshd@21-165.232.158.142:22-139.178.68.195:51290.service - OpenSSH per-connection server daemon (139.178.68.195:51290). May 15 15:14:59.389065 sshd[4492]: Accepted publickey for core from 139.178.68.195 port 51290 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:14:59.391038 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:14:59.396979 systemd-logind[1490]: New session 19 of user core. May 15 15:14:59.402718 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 15:14:59.560218 sshd[4494]: Connection closed by 139.178.68.195 port 51290 May 15 15:14:59.560133 sshd-session[4492]: pam_unix(sshd:session): session closed for user core May 15 15:14:59.565295 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. May 15 15:14:59.566209 systemd[1]: sshd@21-165.232.158.142:22-139.178.68.195:51290.service: Deactivated successfully. May 15 15:14:59.570375 systemd[1]: session-19.scope: Deactivated successfully. May 15 15:14:59.575244 systemd-logind[1490]: Removed session 19. May 15 15:15:00.029449 sshd[4406]: PAM: Permission denied for root from 218.92.0.166 May 15 15:15:00.567661 sshd[4406]: Received disconnect from 218.92.0.166 port 50832:11: [preauth] May 15 15:15:00.568232 sshd[4406]: Disconnected from authenticating user root 218.92.0.166 port 50832 [preauth] May 15 15:15:00.571902 systemd[1]: sshd@19-165.232.158.142:22-218.92.0.166:50832.service: Deactivated successfully. 
May 15 15:15:01.119667 kubelet[2768]: E0515 15:15:01.119619 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:03.114918 kubelet[2768]: E0515 15:15:03.114334 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:03.118243 containerd[1566]: time="2025-05-15T15:15:03.117086473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:15:03.118767 containerd[1566]: time="2025-05-15T15:15:03.118330406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:15:03.278700 containerd[1566]: time="2025-05-15T15:15:03.275719932Z" level=error msg="Failed to destroy network for sandbox \"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.278700 containerd[1566]: time="2025-05-15T15:15:03.278316055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.278965 kubelet[2768]: E0515 15:15:03.278745 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.278965 kubelet[2768]: E0515 15:15:03.278810 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:03.278965 kubelet[2768]: E0515 15:15:03.278836 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:03.278965 kubelet[2768]: E0515 15:15:03.278891 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f3a368e94ceb82c47157e5033ec5e80e6e138636004d5c67affa2c9182b6c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:03.281725 systemd[1]: run-netns-cni\x2d7782afb0\x2d77ab\x2dd432\x2d660e\x2d24ff95da8431.mount: Deactivated successfully. May 15 15:15:03.313041 containerd[1566]: time="2025-05-15T15:15:03.312973287Z" level=error msg="Failed to destroy network for sandbox \"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.319137 containerd[1566]: time="2025-05-15T15:15:03.316297997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.319395 kubelet[2768]: E0515 15:15:03.318480 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:03.319395 kubelet[2768]: E0515 15:15:03.318564 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:03.319395 kubelet[2768]: E0515 15:15:03.318601 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:03.319395 kubelet[2768]: E0515 15:15:03.318673 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9bdee364deafea452db3addd640a668bb5a3e50b0a89fd644b9067875850b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:03.317148 systemd[1]: run-netns-cni\x2ddba01fc3\x2d18d1\x2d6166\x2dc2d4\x2dd7683f922ee7.mount: Deactivated successfully. May 15 15:15:04.577088 systemd[1]: Started sshd@22-165.232.158.142:22-139.178.68.195:34036.service - OpenSSH per-connection server daemon (139.178.68.195:34036). May 15 15:15:04.649461 sshd[4572]: Accepted publickey for core from 139.178.68.195 port 34036 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:04.651947 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:04.661147 systemd-logind[1490]: New session 20 of user core. May 15 15:15:04.667497 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 15:15:04.862247 sshd[4574]: Connection closed by 139.178.68.195 port 34036 May 15 15:15:04.862906 sshd-session[4572]: pam_unix(sshd:session): session closed for user core May 15 15:15:04.872141 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. May 15 15:15:04.873498 systemd[1]: sshd@22-165.232.158.142:22-139.178.68.195:34036.service: Deactivated successfully. May 15 15:15:04.877596 systemd[1]: session-20.scope: Deactivated successfully. May 15 15:15:04.883298 systemd-logind[1490]: Removed session 20. 
May 15 15:15:07.027304 kubelet[2768]: I0515 15:15:07.027255 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:07.027304 kubelet[2768]: I0515 15:15:07.027304 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:07.030737 kubelet[2768]: I0515 15:15:07.030668 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:07.058150 kubelet[2768]: I0515 15:15:07.058115 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:07.058317 kubelet[2768]: I0515 15:15:07.058217 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058269 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058279 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058286 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058292 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058299 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:07.058317 kubelet[2768]: E0515 15:15:07.058310 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:07.058552 kubelet[2768]: E0515 15:15:07.058321 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:07.058552 kubelet[2768]: E0515 15:15:07.058341 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:07.058552 kubelet[2768]: E0515 15:15:07.058350 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:07.058552 kubelet[2768]: E0515 15:15:07.058360 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:07.058552 kubelet[2768]: I0515 15:15:07.058371 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:07.116956 kubelet[2768]: E0515 15:15:07.115517 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:07.117938 containerd[1566]: time="2025-05-15T15:15:07.117378315Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:15:07.227213 containerd[1566]: time="2025-05-15T15:15:07.224695133Z" level=error msg="Failed to destroy network for sandbox \"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:07.226972 systemd[1]: run-netns-cni\x2d14124cec\x2de4c3\x2d64a1\x2dc238\x2d35797813e18a.mount: Deactivated successfully. May 15 15:15:07.228641 containerd[1566]: time="2025-05-15T15:15:07.228585738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:07.230631 kubelet[2768]: E0515 15:15:07.230475 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:07.230940 kubelet[2768]: E0515 15:15:07.230767 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:07.230940 kubelet[2768]: E0515 15:15:07.230798 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:07.230940 kubelet[2768]: E0515 15:15:07.230847 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ccea09817894400b44c1ab2bb23a3cfd5aef5847fae9dd9fac0fea350cefba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:15:09.877439 systemd[1]: Started 
sshd@23-165.232.158.142:22-139.178.68.195:34044.service - OpenSSH per-connection server daemon (139.178.68.195:34044). May 15 15:15:09.942911 sshd[4618]: Accepted publickey for core from 139.178.68.195 port 34044 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:09.944792 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:09.954156 systemd-logind[1490]: New session 21 of user core. May 15 15:15:09.960473 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 15:15:10.116210 containerd[1566]: time="2025-05-15T15:15:10.115875646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:15:10.116628 kubelet[2768]: E0515 15:15:10.116390 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:10.123361 kubelet[2768]: E0515 15:15:10.123103 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-56p29" podUID="ea3f9278-e4ee-4dca-80e1-48db54fe37e5" May 15 15:15:10.141009 sshd[4620]: Connection closed by 139.178.68.195 port 34044 May 15 15:15:10.141290 sshd-session[4618]: pam_unix(sshd:session): session closed for user core May 15 15:15:10.154516 systemd[1]: sshd@23-165.232.158.142:22-139.178.68.195:34044.service: Deactivated successfully. May 15 15:15:10.159398 systemd[1]: session-21.scope: Deactivated successfully. May 15 15:15:10.164788 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. May 15 15:15:10.170719 systemd-logind[1490]: Removed session 21. 
May 15 15:15:10.229315 containerd[1566]: time="2025-05-15T15:15:10.229233001Z" level=error msg="Failed to destroy network for sandbox \"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:10.232436 containerd[1566]: time="2025-05-15T15:15:10.232308822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:10.237075 kubelet[2768]: E0515 15:15:10.234799 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:10.237075 kubelet[2768]: E0515 15:15:10.234878 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:10.237075 kubelet[2768]: E0515 15:15:10.234916 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:10.237075 kubelet[2768]: E0515 15:15:10.234975 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d99d64e94b36e384d9f88545a3a5d3aadfd91564ded540448d8ee59524a00746\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:15:10.235306 systemd[1]: run-netns-cni\x2dee7969d0\x2d33cc\x2d1b49\x2d4745\x2decab7aaf9afd.mount: Deactivated successfully. 
May 15 15:15:14.116462 containerd[1566]: time="2025-05-15T15:15:14.116386558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:15:14.196428 containerd[1566]: time="2025-05-15T15:15:14.196375462Z" level=error msg="Failed to destroy network for sandbox \"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:14.197586 containerd[1566]: time="2025-05-15T15:15:14.197523739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:14.198344 kubelet[2768]: E0515 15:15:14.197801 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:14.198344 kubelet[2768]: E0515 15:15:14.197882 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:14.198344 kubelet[2768]: E0515 15:15:14.197913 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:14.198344 kubelet[2768]: E0515 15:15:14.197970 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe6c9f35df07ac8e22feef52674f4e6a16cca626835133c849cbe737af49989\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" 
podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:14.201430 systemd[1]: run-netns-cni\x2d2f795c93\x2d0e83\x2ddcc1\x2d73e0\x2d12aad93f17ac.mount: Deactivated successfully. May 15 15:15:15.157750 systemd[1]: Started sshd@24-165.232.158.142:22-139.178.68.195:50436.service - OpenSSH per-connection server daemon (139.178.68.195:50436). May 15 15:15:15.214897 sshd[4692]: Accepted publickey for core from 139.178.68.195 port 50436 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:15.217313 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:15.226074 systemd-logind[1490]: New session 22 of user core. May 15 15:15:15.230425 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 15:15:15.383841 sshd[4694]: Connection closed by 139.178.68.195 port 50436 May 15 15:15:15.384060 sshd-session[4692]: pam_unix(sshd:session): session closed for user core May 15 15:15:15.390059 systemd[1]: sshd@24-165.232.158.142:22-139.178.68.195:50436.service: Deactivated successfully. May 15 15:15:15.392578 systemd[1]: session-22.scope: Deactivated successfully. May 15 15:15:15.394581 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. May 15 15:15:15.396225 systemd-logind[1490]: Removed session 22. May 15 15:15:17.087125 kubelet[2768]: I0515 15:15:17.087009 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:17.087665 kubelet[2768]: I0515 15:15:17.087213 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:17.090813 kubelet[2768]: I0515 15:15:17.090670 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:17.107804 kubelet[2768]: I0515 15:15:17.107438 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:17.107804 kubelet[2768]: I0515 15:15:17.107571 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107625 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107639 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107648 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107658 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107668 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107683 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107698 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107709 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107720 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:17.107804 kubelet[2768]: E0515 15:15:17.107732 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:17.107804 kubelet[2768]: I0515 15:15:17.107747 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:18.114545 kubelet[2768]: E0515 15:15:18.114488 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:18.116351 containerd[1566]: time="2025-05-15T15:15:18.115769328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:15:18.116834 kubelet[2768]: E0515 15:15:18.115072 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:18.117810 containerd[1566]: time="2025-05-15T15:15:18.117601861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:15:18.271402 containerd[1566]: time="2025-05-15T15:15:18.271337022Z" level=error msg="Failed to destroy network for sandbox \"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.272412 containerd[1566]: time="2025-05-15T15:15:18.272280153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.274586 kubelet[2768]: E0515 15:15:18.272586 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.274586 kubelet[2768]: E0515 15:15:18.272659 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:18.274586 kubelet[2768]: E0515 15:15:18.272690 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:18.274586 kubelet[2768]: E0515 15:15:18.272746 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b8ce1c88bdb5b24751d402d0efac094b32457aadbc03504ff70a32ee3419944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:15:18.277562 systemd[1]: run-netns-cni\x2d4f1713f3\x2dc04d\x2d6c92\x2dfe7a\x2d9ef8b3555032.mount: Deactivated successfully. May 15 15:15:18.290048 containerd[1566]: time="2025-05-15T15:15:18.288017093Z" level=error msg="Failed to destroy network for sandbox \"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.293124 containerd[1566]: time="2025-05-15T15:15:18.293061940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.297205 kubelet[2768]: E0515 15:15:18.294807 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:18.297205 kubelet[2768]: E0515 15:15:18.294888 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:18.297205 kubelet[2768]: E0515 15:15:18.294917 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:18.297205 kubelet[2768]: E0515 15:15:18.294994 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9026162f02f2f962aef0a3d461513a677c04952b6fd80de76e7c0e4538c525d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:18.295209 systemd[1]: run-netns-cni\x2d2e20f422\x2d5996\x2d085a\x2d901d\x2dcaef94b416f1.mount: Deactivated successfully. May 15 15:15:20.402497 systemd[1]: Started sshd@25-165.232.158.142:22-139.178.68.195:50452.service - OpenSSH per-connection server daemon (139.178.68.195:50452). May 15 15:15:20.460415 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 50452 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:20.462816 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:20.471281 systemd-logind[1490]: New session 23 of user core. May 15 15:15:20.478470 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 15:15:20.619681 sshd[4772]: Connection closed by 139.178.68.195 port 50452 May 15 15:15:20.620372 sshd-session[4770]: pam_unix(sshd:session): session closed for user core May 15 15:15:20.624746 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. May 15 15:15:20.624960 systemd[1]: sshd@25-165.232.158.142:22-139.178.68.195:50452.service: Deactivated successfully. May 15 15:15:20.627689 systemd[1]: session-23.scope: Deactivated successfully. May 15 15:15:20.631208 systemd-logind[1490]: Removed session 23. 
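Every sandbox failure in this stretch of the log bottoms out in the same stat call: the Calico CNI plugin looks for /var/lib/calico/nodename on the host and finds nothing, because the calico-node container that writes it has not come up yet. A minimal Go sketch of that readiness probe, included only to illustrate what the repeated error message is testing (the path comes from the log text; the helper name is ours, not Calico's):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the Calico CNI plugin reports in the errors above.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady mirrors the check behind "stat /var/lib/calico/nodename:
// no such file or directory": the file only exists once calico-node has
// started and mounted /var/lib/calico/ into place.
func calicoNodeReady() (bool, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return false, nil // calico-node has not written the file yet
		}
		return false, err // some other filesystem problem
	}
	return true, nil
}

func main() {
	ready, err := calicoNodeReady()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("calico-node ready:", ready)
}
```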
May 15 15:15:21.116894 kubelet[2768]: E0515 15:15:21.116862 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:22.114727 containerd[1566]: time="2025-05-15T15:15:22.114682052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:15:22.194088 containerd[1566]: time="2025-05-15T15:15:22.194006394Z" level=error msg="Failed to destroy network for sandbox \"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:22.196392 containerd[1566]: time="2025-05-15T15:15:22.196333997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:22.197848 kubelet[2768]: E0515 15:15:22.196902 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:22.197848 kubelet[2768]: E0515 15:15:22.196967 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:22.197848 kubelet[2768]: E0515 15:15:22.196990 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:22.197848 kubelet[2768]: E0515 15:15:22.197041 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssx6b_calico-system(7521021f-77bb-4466-96bd-6730a9b2c004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae39192ac197a260edebb1333899f7bb58b93d7a0fbe95bc97707070ea1c0715\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssx6b" podUID="7521021f-77bb-4466-96bd-6730a9b2c004" May 15 15:15:22.197990 systemd[1]: run-netns-cni\x2dd656090e\x2de39a\x2d16ea\x2d3211\x2dd34393b8c8aa.mount: Deactivated successfully. May 15 15:15:25.116136 containerd[1566]: time="2025-05-15T15:15:25.116057358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:15:25.118470 kubelet[2768]: E0515 15:15:25.118213 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:25.119680 containerd[1566]: time="2025-05-15T15:15:25.119608112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:15:25.196198 containerd[1566]: time="2025-05-15T15:15:25.193991503Z" level=error msg="Failed to destroy network for sandbox \"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:25.196778 systemd[1]: run-netns-cni\x2d4cb68d37\x2db908\x2d5d5a\x2df385\x2d0d8309c90ca2.mount: Deactivated successfully. May 15 15:15:25.198274 containerd[1566]: time="2025-05-15T15:15:25.198229705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:25.198708 kubelet[2768]: E0515 15:15:25.198668 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:25.198847 kubelet[2768]: E0515 15:15:25.198730 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:25.198847 kubelet[2768]: E0515 15:15:25.198752 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:25.199272 kubelet[2768]: E0515 
15:15:25.199068 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b36d470bb02ce5a990d44ce641314553b46f0ea9fdfcee90f0dd133b82d37a78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:25.635150 systemd[1]: Started sshd@26-165.232.158.142:22-139.178.68.195:39780.service - OpenSSH per-connection server daemon (139.178.68.195:39780). May 15 15:15:25.697717 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 39780 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:25.699475 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:25.707423 systemd-logind[1490]: New session 24 of user core. May 15 15:15:25.718437 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 15:15:25.865252 sshd[4846]: Connection closed by 139.178.68.195 port 39780 May 15 15:15:25.863674 sshd-session[4844]: pam_unix(sshd:session): session closed for user core May 15 15:15:25.869745 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. May 15 15:15:25.870599 systemd[1]: sshd@26-165.232.158.142:22-139.178.68.195:39780.service: Deactivated successfully. May 15 15:15:25.874722 systemd[1]: session-24.scope: Deactivated successfully. May 15 15:15:25.877405 systemd-logind[1490]: Removed session 24. 
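The recurring "Nameserver limits exceeded" errors above come from the kubelet trimming the node's resolv.conf to a fixed number of nameservers before handing DNS settings to pods; the applied line it logs (67.207.67.2 67.207.67.3 67.207.67.2) is whatever survives the cut. A small Go sketch of that trimming, assuming the classic limit of three entries (the limit itself is an assumption here, not something the log states):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the assumed cap the kubelet is complaining about above.
const maxNameservers = 3

// appliedNameservers reads a resolv.conf-style file and keeps only the first
// maxNameservers "nameserver" entries, i.e. what ends up on the applied line.
func appliedNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // the rest are the "omitted" ones
	}
	return servers, nil
}

func main() {
	servers, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```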
May 15 15:15:27.154966 kubelet[2768]: I0515 15:15:27.154908 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:27.155843 kubelet[2768]: I0515 15:15:27.155431 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:27.159567 kubelet[2768]: I0515 15:15:27.159543 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:27.162434 kubelet[2768]: I0515 15:15:27.162401 2768 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 15 15:15:27.171198 containerd[1566]: time="2025-05-15T15:15:27.170081012Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:15:27.200280 containerd[1566]: time="2025-05-15T15:15:27.198544895Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 15 15:15:27.211722 containerd[1566]: time="2025-05-15T15:15:27.211665416Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:15:27.212045 containerd[1566]: time="2025-05-15T15:15:27.212028590Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 15 15:15:27.212315 containerd[1566]: time="2025-05-15T15:15:27.212284407Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:15:27.212667 kubelet[2768]: I0515 15:15:27.212545 2768 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" size=321520 runtimeHandler="" May 15 15:15:27.212814 containerd[1566]: time="2025-05-15T15:15:27.212791896Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:15:27.213901 containerd[1566]: time="2025-05-15T15:15:27.213865248Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.9\"" May 15 15:15:27.215962 containerd[1566]: time="2025-05-15T15:15:27.215919131Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"" May 15 15:15:27.217595 containerd[1566]: time="2025-05-15T15:15:27.217553858Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" returns successfully" May 15 15:15:27.217758 containerd[1566]: time="2025-05-15T15:15:27.217735553Z" level=info msg="ImageDelete event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:15:27.219268 containerd[1566]: time="2025-05-15T15:15:27.218318005Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:15:27.219376 kubelet[2768]: I0515 15:15:27.218053 2768 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" size=57236178 runtimeHandler="" May 15 15:15:27.220342 containerd[1566]: time="2025-05-15T15:15:27.220304234Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.12-0\"" May 15 15:15:27.220943 containerd[1566]: time="2025-05-15T15:15:27.220876604Z" level=info msg="ImageDelete event 
name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"" May 15 15:15:27.221215 containerd[1566]: time="2025-05-15T15:15:27.221189633Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" returns successfully" May 15 15:15:27.224281 containerd[1566]: time="2025-05-15T15:15:27.221292904Z" level=info msg="ImageDelete event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:15:27.243693 kubelet[2768]: I0515 15:15:27.243661 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:27.243901 kubelet[2768]: I0515 15:15:27.243881 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-node-56p29","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:27.244137 kubelet[2768]: E0515 15:15:27.244119 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:27.244249 kubelet[2768]: E0515 15:15:27.244239 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:27.244463 kubelet[2768]: E0515 15:15:27.244452 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244614 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244633 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244651 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244665 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244680 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244691 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:27.245494 kubelet[2768]: E0515 15:15:27.244704 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:27.245494 kubelet[2768]: I0515 15:15:27.244720 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:30.115039 kubelet[2768]: E0515 15:15:30.114892 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 
15:15:30.118277 containerd[1566]: time="2025-05-15T15:15:30.118098314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:15:30.284965 containerd[1566]: time="2025-05-15T15:15:30.284729987Z" level=error msg="Failed to destroy network for sandbox \"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:30.287141 systemd[1]: run-netns-cni\x2d9901de6f\x2dcc99\x2d97e7\x2dc831\x2d28b98be04756.mount: Deactivated successfully. May 15 15:15:30.289994 containerd[1566]: time="2025-05-15T15:15:30.289929180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:30.291245 kubelet[2768]: E0515 15:15:30.290797 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:30.291245 kubelet[2768]: E0515 15:15:30.290884 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:30.291245 kubelet[2768]: E0515 15:15:30.290911 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:30.291245 kubelet[2768]: E0515 15:15:30.290964 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2cad97079499904a77d9576077108c10fdf7ee8dea650047e93970db28d8600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" 
podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:30.884089 systemd[1]: Started sshd@27-165.232.158.142:22-139.178.68.195:39786.service - OpenSSH per-connection server daemon (139.178.68.195:39786). May 15 15:15:30.977187 sshd[4892]: Accepted publickey for core from 139.178.68.195 port 39786 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:30.981018 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:30.991351 systemd-logind[1490]: New session 25 of user core. May 15 15:15:30.997552 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 15:15:31.209264 sshd[4894]: Connection closed by 139.178.68.195 port 39786 May 15 15:15:31.209792 sshd-session[4892]: pam_unix(sshd:session): session closed for user core May 15 15:15:31.220871 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. May 15 15:15:31.221939 systemd[1]: sshd@27-165.232.158.142:22-139.178.68.195:39786.service: Deactivated successfully. May 15 15:15:31.227094 systemd[1]: session-25.scope: Deactivated successfully. May 15 15:15:31.230586 systemd-logind[1490]: Removed session 25. May 15 15:15:33.114979 kubelet[2768]: E0515 15:15:33.114937 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:33.117738 containerd[1566]: time="2025-05-15T15:15:33.117696448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:15:33.252520 containerd[1566]: time="2025-05-15T15:15:33.252460606Z" level=error msg="Failed to destroy network for sandbox \"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:33.255239 containerd[1566]: time="2025-05-15T15:15:33.255048128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:33.256358 kubelet[2768]: E0515 15:15:33.256319 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:15:33.256485 kubelet[2768]: E0515 15:15:33.256383 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:33.256485 kubelet[2768]: E0515 15:15:33.256405 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:33.256926 kubelet[2768]: E0515 15:15:33.256789 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a393947ddc7b499adb020e356a6c42d202f6ff696bb65001416949c36ef6f631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:15:33.257291 systemd[1]: run-netns-cni\x2d3679fb9d\x2d2b17\x2d1e5c\x2d96aa\x2dbd1747e476f6.mount: Deactivated successfully. May 15 15:15:33.369985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145584156.mount: Deactivated successfully. May 15 15:15:33.396152 containerd[1566]: time="2025-05-15T15:15:33.396099050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:33.397349 containerd[1566]: time="2025-05-15T15:15:33.397302678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:15:33.398080 containerd[1566]: time="2025-05-15T15:15:33.397484604Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:33.399313 containerd[1566]: time="2025-05-15T15:15:33.399250094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:33.400047 containerd[1566]: time="2025-05-15T15:15:33.399814579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.277590013s" May 15 15:15:33.400047 containerd[1566]: time="2025-05-15T15:15:33.399849556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 15:15:33.437431 containerd[1566]: time="2025-05-15T15:15:33.437376009Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 15:15:33.446217 containerd[1566]: 
time="2025-05-15T15:15:33.445291015Z" level=info msg="Container e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1: CDI devices from CRI Config.CDIDevices: []" May 15 15:15:33.458434 containerd[1566]: time="2025-05-15T15:15:33.458340053Z" level=info msg="CreateContainer within sandbox \"b82ebf105b1c24c8d1c4604aed411903b8a42616bfe80cb67f9aa0b3e38b8bb1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\"" May 15 15:15:33.460404 containerd[1566]: time="2025-05-15T15:15:33.459153339Z" level=info msg="StartContainer for \"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\"" May 15 15:15:33.462159 containerd[1566]: time="2025-05-15T15:15:33.462050559Z" level=info msg="connecting to shim e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1" address="unix:///run/containerd/s/6fb3aa088f0edaf21bd87e29aee2457469ddeb6e744a09ed5e9b09221fe5d21e" protocol=ttrpc version=3 May 15 15:15:33.654878 systemd[1]: Started cri-containerd-e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1.scope - libcontainer container e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1. May 15 15:15:33.780303 containerd[1566]: time="2025-05-15T15:15:33.780265079Z" level=info msg="StartContainer for \"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" returns successfully" May 15 15:15:34.029229 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 15:15:34.030045 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 15:15:34.582786 kubelet[2768]: E0515 15:15:34.582751 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:34.605750 kubelet[2768]: I0515 15:15:34.603570 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-56p29" podStartSLOduration=2.316595566 podStartE2EDuration="1m54.603549473s" podCreationTimestamp="2025-05-15 15:13:40 +0000 UTC" firstStartedPulling="2025-05-15 15:13:41.115684564 +0000 UTC m=+20.184858352" lastFinishedPulling="2025-05-15 15:15:33.402638484 +0000 UTC m=+132.471812259" observedRunningTime="2025-05-15 15:15:34.602477089 +0000 UTC m=+133.671650897" watchObservedRunningTime="2025-05-15 15:15:34.603549473 +0000 UTC m=+133.672723280" May 15 15:15:35.115252 containerd[1566]: time="2025-05-15T15:15:35.115191468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,}" May 15 15:15:35.443910 containerd[1566]: time="2025-05-15T15:15:35.441480674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"12548d56db8b3b6adf7671161ed085cf0aa86708a4d6ed9ea6d0deb55e47c4e0\" pid:5028 exit_status:1 exited_at:{seconds:1747322135 nanos:440912669}" May 15 15:15:35.446767 systemd-networkd[1449]: cali1611f5935fc: Link UP May 15 15:15:35.448073 systemd-networkd[1449]: cali1611f5935fc: Gained carrier May 15 15:15:35.476577 containerd[1566]: 2025-05-15 15:15:35.149 [INFO][4996] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 15:15:35.476577 containerd[1566]: 2025-05-15 15:15:35.178 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0 csi-node-driver- calico-system 7521021f-77bb-4466-96bd-6730a9b2c004 649 0 2025-05-15 15:13:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334.0.0-a-3982d56781 csi-node-driver-ssx6b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1611f5935fc [] []}} ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-" May 15 15:15:35.476577 containerd[1566]: 2025-05-15 15:15:35.178 [INFO][4996] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.476577 containerd[1566]: 2025-05-15 15:15:35.345 [INFO][5008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" HandleID="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Workload="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.374 [INFO][5008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" HandleID="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Workload="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c2940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-3982d56781", "pod":"csi-node-driver-ssx6b", "timestamp":"2025-05-15 15:15:35.345708533 +0000 UTC"}, Hostname:"ci-4334.0.0-a-3982d56781", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.374 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.374 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
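Stepping back briefly to the startup-latency line logged at 15:15:34 above: its numbers are internally consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). The snippet below only checks that arithmetic against the logged values; it is an interpretation of the figures, not a statement about the kubelet's internals:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values quoted in the pod_startup_latency_tracker line above.
	podStartE2E := 114*time.Second + 603549473*time.Nanosecond // 1m54.603549473s
	// Pull window from the monotonic offsets m=+20.184858352 and m=+132.471812259.
	pullWindow := 132471812259*time.Nanosecond - 20184858352*time.Nanosecond

	// Reading SLO duration as "end-to-end startup minus time spent pulling images".
	slo := podStartE2E - pullWindow
	fmt.Println(slo) // prints 2.316595566s, matching the logged podStartSLOduration
}
```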
May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.374 [INFO][5008] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-3982d56781' May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.377 [INFO][5008] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.386 [INFO][5008] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.394 [INFO][5008] ipam/ipam.go 489: Trying affinity for 192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.397 [INFO][5008] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.477810 containerd[1566]: 2025-05-15 15:15:35.401 [INFO][5008] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.401 [INFO][5008] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.407 [INFO][5008] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581 May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.415 [INFO][5008] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.423 [INFO][5008] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.129/26] block=192.168.40.128/26 handle="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.423 [INFO][5008] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.129/26] handle="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" host="ci-4334.0.0-a-3982d56781" May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.423 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 15:15:35.478126 containerd[1566]: 2025-05-15 15:15:35.424 [INFO][5008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.129/26] IPv6=[] ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" HandleID="k8s-pod-network.d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Workload="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.478321 containerd[1566]: 2025-05-15 15:15:35.428 [INFO][4996] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7521021f-77bb-4466-96bd-6730a9b2c004", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"", Pod:"csi-node-driver-ssx6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1611f5935fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:35.478392 containerd[1566]: 2025-05-15 15:15:35.428 [INFO][4996] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.129/32] ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.478392 containerd[1566]: 2025-05-15 15:15:35.428 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1611f5935fc ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.478392 containerd[1566]: 2025-05-15 15:15:35.449 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.478482 containerd[1566]: 2025-05-15 15:15:35.450 [INFO][4996] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7521021f-77bb-4466-96bd-6730a9b2c004", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581", Pod:"csi-node-driver-ssx6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1611f5935fc", MAC:"fe:e9:38:10:a3:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:35.478540 containerd[1566]: 2025-05-15 15:15:35.472 [INFO][4996] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" Namespace="calico-system" Pod="csi-node-driver-ssx6b" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-csi--node--driver--ssx6b-eth0" May 15 15:15:35.555861 containerd[1566]: time="2025-05-15T15:15:35.555794096Z" level=info msg="connecting to shim d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581" address="unix:///run/containerd/s/edee283a4caa356a5939d330650a617a21f19c22879ae76b1ec0b5f1576b2bff" namespace=k8s.io protocol=ttrpc version=3 May 15 15:15:35.589277 kubelet[2768]: E0515 15:15:35.589220 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:35.644983 systemd[1]: Started cri-containerd-d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581.scope - libcontainer container d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581. 
May 15 15:15:35.721265 containerd[1566]: time="2025-05-15T15:15:35.720725068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssx6b,Uid:7521021f-77bb-4466-96bd-6730a9b2c004,Namespace:calico-system,Attempt:0,} returns sandbox id \"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581\"" May 15 15:15:35.725654 containerd[1566]: time="2025-05-15T15:15:35.725314445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 15:15:35.979853 containerd[1566]: time="2025-05-15T15:15:35.979724375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"3d2cb11d6f1601175981b957e6957da6cccd52765290aefc1978fd63aaed640e\" pid:5155 exit_status:1 exited_at:{seconds:1747322135 nanos:978550131}" May 15 15:15:36.115622 containerd[1566]: time="2025-05-15T15:15:36.115576914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,}" May 15 15:15:36.235599 systemd[1]: Started sshd@28-165.232.158.142:22-139.178.68.195:43206.service - OpenSSH per-connection server daemon (139.178.68.195:43206). May 15 15:15:36.344030 systemd-networkd[1449]: cali193dfe7e418: Link UP May 15 15:15:36.344640 systemd-networkd[1449]: cali193dfe7e418: Gained carrier May 15 15:15:36.356436 sshd[5268]: Accepted publickey for core from 139.178.68.195 port 43206 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:36.368879 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:36.372363 containerd[1566]: 2025-05-15 15:15:36.175 [INFO][5241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0 calico-kube-controllers-5595bbd956- calico-system 85795e54-736b-42e9-a348-a1b529022653 749 0 2025-05-15 15:13:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5595bbd956 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334.0.0-a-3982d56781 calico-kube-controllers-5595bbd956-4ksb6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali193dfe7e418 [] []}} ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-" May 15 15:15:36.372363 containerd[1566]: 2025-05-15 15:15:36.176 [INFO][5241] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.372363 containerd[1566]: 2025-05-15 15:15:36.228 [INFO][5261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" HandleID="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" 
Workload="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.259 [INFO][5261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" HandleID="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Workload="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003034c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-3982d56781", "pod":"calico-kube-controllers-5595bbd956-4ksb6", "timestamp":"2025-05-15 15:15:36.22811818 +0000 UTC"}, Hostname:"ci-4334.0.0-a-3982d56781", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.261 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.261 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.261 [INFO][5261] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-3982d56781' May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.264 [INFO][5261] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.271 [INFO][5261] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.282 [INFO][5261] ipam/ipam.go 489: Trying affinity for 192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.286 [INFO][5261] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.372598 containerd[1566]: 2025-05-15 15:15:36.292 [INFO][5261] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.292 [INFO][5261] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.296 [INFO][5261] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.315 [INFO][5261] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.323 [INFO][5261] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.130/26] block=192.168.40.128/26 handle="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.323 [INFO][5261] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.40.130/26] handle="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" host="ci-4334.0.0-a-3982d56781" May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.324 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 15:15:36.375609 containerd[1566]: 2025-05-15 15:15:36.325 [INFO][5261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.130/26] IPv6=[] ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" HandleID="k8s-pod-network.e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Workload="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.376395 containerd[1566]: 2025-05-15 15:15:36.334 [INFO][5241] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0", GenerateName:"calico-kube-controllers-5595bbd956-", Namespace:"calico-system", SelfLink:"", UID:"85795e54-736b-42e9-a348-a1b529022653", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5595bbd956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"", Pod:"calico-kube-controllers-5595bbd956-4ksb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali193dfe7e418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:36.376544 containerd[1566]: 2025-05-15 15:15:36.334 [INFO][5241] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.130/32] ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.376544 containerd[1566]: 2025-05-15 15:15:36.334 [INFO][5241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali193dfe7e418 ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.376544 containerd[1566]: 2025-05-15 15:15:36.342 [INFO][5241] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.376677 containerd[1566]: 2025-05-15 15:15:36.342 [INFO][5241] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0", GenerateName:"calico-kube-controllers-5595bbd956-", Namespace:"calico-system", SelfLink:"", UID:"85795e54-736b-42e9-a348-a1b529022653", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5595bbd956", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b", Pod:"calico-kube-controllers-5595bbd956-4ksb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali193dfe7e418", MAC:"fe:2d:f8:90:47:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:36.376770 containerd[1566]: 2025-05-15 15:15:36.360 [INFO][5241] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" Namespace="calico-system" Pod="calico-kube-controllers-5595bbd956-4ksb6" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-calico--kube--controllers--5595bbd956--4ksb6-eth0" May 15 15:15:36.385679 systemd-logind[1490]: New session 26 of user core. May 15 15:15:36.393628 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 15:15:36.429233 containerd[1566]: time="2025-05-15T15:15:36.428818383Z" level=info msg="connecting to shim e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b" address="unix:///run/containerd/s/b3f6120ce8e9c9dd471a87dcefa074a094b9210d037fce98fb457d1827fac4c1" namespace=k8s.io protocol=ttrpc version=3 May 15 15:15:36.471378 systemd[1]: Started cri-containerd-e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b.scope - libcontainer container e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b. 
May 15 15:15:36.678828 sshd[5280]: Connection closed by 139.178.68.195 port 43206 May 15 15:15:36.678688 sshd-session[5268]: pam_unix(sshd:session): session closed for user core May 15 15:15:36.684805 systemd[1]: sshd@28-165.232.158.142:22-139.178.68.195:43206.service: Deactivated successfully. May 15 15:15:36.691304 systemd[1]: session-26.scope: Deactivated successfully. May 15 15:15:36.696064 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. May 15 15:15:36.699624 systemd-logind[1490]: Removed session 26. May 15 15:15:36.701300 containerd[1566]: time="2025-05-15T15:15:36.701263089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5595bbd956-4ksb6,Uid:85795e54-736b-42e9-a348-a1b529022653,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8299ca46e035c696c3b4a4dcf471f17ce923706d2f2d32889bcc520bb13660b\"" May 15 15:15:36.869873 systemd-networkd[1449]: vxlan.calico: Link UP May 15 15:15:36.869909 systemd-networkd[1449]: vxlan.calico: Gained carrier May 15 15:15:37.093416 systemd-networkd[1449]: cali1611f5935fc: Gained IPv6LL May 15 15:15:37.274384 kubelet[2768]: I0515 15:15:37.274349 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:37.275081 kubelet[2768]: I0515 15:15:37.274830 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:37.277999 kubelet[2768]: I0515 15:15:37.277971 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:37.309449 kubelet[2768]: I0515 15:15:37.307945 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:37.309792 kubelet[2768]: I0515 15:15:37.309667 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/csi-node-driver-ssx6b","calico-system/calico-typha-64b5f48db9-jvlhw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309726 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309736 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309745 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309752 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309768 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309778 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:37.309792 kubelet[2768]: E0515 15:15:37.309787 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:37.310417 kubelet[2768]: E0515 15:15:37.309795 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:37.310417 kubelet[2768]: E0515 15:15:37.309837 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:37.310417 kubelet[2768]: E0515 15:15:37.309846 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:37.310417 kubelet[2768]: I0515 15:15:37.309856 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:37.484227 containerd[1566]: time="2025-05-15T15:15:37.483835294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:37.484652 containerd[1566]: time="2025-05-15T15:15:37.484423126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 15:15:37.485678 containerd[1566]: time="2025-05-15T15:15:37.485609897Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:37.487422 containerd[1566]: time="2025-05-15T15:15:37.487310835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:37.487987 containerd[1566]: time="2025-05-15T15:15:37.487958224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.762606505s" May 15 15:15:37.488121 containerd[1566]: time="2025-05-15T15:15:37.488105633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 15:15:37.489112 containerd[1566]: time="2025-05-15T15:15:37.489089952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:15:37.492347 containerd[1566]: time="2025-05-15T15:15:37.492314768Z" level=info msg="CreateContainer within sandbox \"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 15:15:37.503940 containerd[1566]: time="2025-05-15T15:15:37.503896624Z" level=info msg="Container 9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6: CDI devices from CRI Config.CDIDevices: []" May 15 15:15:37.517761 containerd[1566]: time="2025-05-15T15:15:37.517689406Z" level=info msg="CreateContainer within sandbox \"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6\"" May 15 15:15:37.518762 containerd[1566]: time="2025-05-15T15:15:37.518710705Z" level=info msg="StartContainer for \"9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6\"" 
May 15 15:15:37.522543 containerd[1566]: time="2025-05-15T15:15:37.522502149Z" level=info msg="connecting to shim 9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6" address="unix:///run/containerd/s/edee283a4caa356a5939d330650a617a21f19c22879ae76b1ec0b5f1576b2bff" protocol=ttrpc version=3 May 15 15:15:37.562398 systemd[1]: Started cri-containerd-9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6.scope - libcontainer container 9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6. May 15 15:15:37.643286 containerd[1566]: time="2025-05-15T15:15:37.643244058Z" level=info msg="StartContainer for \"9d21368f7c7fed573766aa530fba9a4786ca4f6a48069496bb15704388dbb3a6\" returns successfully" May 15 15:15:38.117615 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL May 15 15:15:38.373508 systemd-networkd[1449]: cali193dfe7e418: Gained IPv6LL May 15 15:15:39.896060 containerd[1566]: time="2025-05-15T15:15:39.895993496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 15:15:39.909137 containerd[1566]: time="2025-05-15T15:15:39.909065906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/usr/bin/kube-controllers: no space left on device" May 15 15:15:39.945044 kubelet[2768]: E0515 15:15:39.944985 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:15:39.945044 kubelet[2768]: E0515 15:15:39.945037 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:15:39.946604 containerd[1566]: time="2025-05-15T15:15:39.945905413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 15:15:39.952187 kubelet[2768]: E0515 15:15:39.952130 2768 kuberuntime_manager.go:1256] container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6flx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/usr/bin/kube-controllers: no space left on device May 15 15:15:39.952562 kubelet[2768]: E0515 15:15:39.952492 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/73/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" 
podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:40.624534 kubelet[2768]: E0515 15:15:40.624412 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:41.116942 kubelet[2768]: E0515 15:15:41.116412 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:41.118092 containerd[1566]: time="2025-05-15T15:15:41.117781608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,}" May 15 15:15:41.365399 systemd-networkd[1449]: cali92db4c1a7e1: Link UP May 15 15:15:41.370390 systemd-networkd[1449]: cali92db4c1a7e1: Gained carrier May 15 15:15:41.426292 containerd[1566]: 2025-05-15 15:15:41.178 [INFO][5451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0 coredns-7db6d8ff4d- kube-system 1ad4b350-5146-45de-9d05-ced32cc472bb 751 0 2025-05-15 15:13:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-3982d56781 coredns-7db6d8ff4d-zchv5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92db4c1a7e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-" May 15 15:15:41.426292 containerd[1566]: 2025-05-15 15:15:41.178 [INFO][5451] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.426292 containerd[1566]: 2025-05-15 15:15:41.238 [INFO][5464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" HandleID="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.265 [INFO][5464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" HandleID="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-3982d56781", "pod":"coredns-7db6d8ff4d-zchv5", "timestamp":"2025-05-15 15:15:41.238532432 +0000 UTC"}, Hostname:"ci-4334.0.0-a-3982d56781", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.265 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.265 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.265 [INFO][5464] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-3982d56781' May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.272 [INFO][5464] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.285 [INFO][5464] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.301 [INFO][5464] ipam/ipam.go 489: Trying affinity for 192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.307 [INFO][5464] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.426561 containerd[1566]: 2025-05-15 15:15:41.318 [INFO][5464] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.320 [INFO][5464] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.325 [INFO][5464] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323 May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.336 [INFO][5464] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.352 [INFO][5464] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.131/26] block=192.168.40.128/26 handle="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.352 [INFO][5464] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.131/26] handle="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" host="ci-4334.0.0-a-3982d56781" May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.352 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 15:15:41.427627 containerd[1566]: 2025-05-15 15:15:41.352 [INFO][5464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.131/26] IPv6=[] ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" HandleID="k8s-pod-network.c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.358 [INFO][5451] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ad4b350-5146-45de-9d05-ced32cc472bb", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"", Pod:"coredns-7db6d8ff4d-zchv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92db4c1a7e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.359 [INFO][5451] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.131/32] ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.359 [INFO][5451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92db4c1a7e1 ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.368 [INFO][5451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" 
WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.372 [INFO][5451] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1ad4b350-5146-45de-9d05-ced32cc472bb", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323", Pod:"coredns-7db6d8ff4d-zchv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92db4c1a7e1", MAC:"ea:d5:89:f0:ef:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:41.427810 containerd[1566]: 2025-05-15 15:15:41.416 [INFO][5451] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zchv5" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--zchv5-eth0" May 15 15:15:41.511444 containerd[1566]: time="2025-05-15T15:15:41.510402149Z" level=info msg="connecting to shim c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323" address="unix:///run/containerd/s/b889ac0797770d136d6a68cfedff489ff2e3389f1e26f66868a6dc1a23a5bb43" namespace=k8s.io protocol=ttrpc version=3 May 15 15:15:41.606362 systemd[1]: Started cri-containerd-c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323.scope - libcontainer container c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323. May 15 15:15:41.707927 systemd[1]: Started sshd@29-165.232.158.142:22-139.178.68.195:43208.service - OpenSSH per-connection server daemon (139.178.68.195:43208). 
May 15 15:15:41.857750 sshd[5523]: Accepted publickey for core from 139.178.68.195 port 43208 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:41.861534 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:41.863439 containerd[1566]: time="2025-05-15T15:15:41.863223179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zchv5,Uid:1ad4b350-5146-45de-9d05-ced32cc472bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323\"" May 15 15:15:41.864552 kubelet[2768]: E0515 15:15:41.864522 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:41.881760 systemd-logind[1490]: New session 27 of user core. May 15 15:15:41.888862 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 15:15:42.209357 sshd[5536]: Connection closed by 139.178.68.195 port 43208 May 15 15:15:42.210076 sshd-session[5523]: pam_unix(sshd:session): session closed for user core May 15 15:15:42.220412 systemd[1]: sshd@29-165.232.158.142:22-139.178.68.195:43208.service: Deactivated successfully. May 15 15:15:42.228658 systemd[1]: session-27.scope: Deactivated successfully. May 15 15:15:42.231727 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. May 15 15:15:42.236117 systemd-logind[1490]: Removed session 27. May 15 15:15:42.359624 containerd[1566]: time="2025-05-15T15:15:42.358645291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:42.359624 containerd[1566]: time="2025-05-15T15:15:42.359576960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 15:15:42.360566 containerd[1566]: time="2025-05-15T15:15:42.360535580Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:42.363935 containerd[1566]: time="2025-05-15T15:15:42.363891621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:42.364642 containerd[1566]: time="2025-05-15T15:15:42.364529095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.418588242s" May 15 15:15:42.364793 containerd[1566]: time="2025-05-15T15:15:42.364773596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 15:15:42.368256 containerd[1566]: time="2025-05-15T15:15:42.368200018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:15:42.369951 containerd[1566]: time="2025-05-15T15:15:42.369245963Z" level=info 
msg="CreateContainer within sandbox \"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 15:15:42.387332 containerd[1566]: time="2025-05-15T15:15:42.387271459Z" level=info msg="Container 2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597: CDI devices from CRI Config.CDIDevices: []" May 15 15:15:42.423256 containerd[1566]: time="2025-05-15T15:15:42.423015806Z" level=info msg="CreateContainer within sandbox \"d22a1dd3b41862155fd29da2e13c331d2d6d3b80d9cddfa011e1d29a08aa2581\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597\"" May 15 15:15:42.424399 containerd[1566]: time="2025-05-15T15:15:42.424357645Z" level=info msg="StartContainer for \"2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597\"" May 15 15:15:42.428207 containerd[1566]: time="2025-05-15T15:15:42.428110838Z" level=info msg="connecting to shim 2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597" address="unix:///run/containerd/s/edee283a4caa356a5939d330650a617a21f19c22879ae76b1ec0b5f1576b2bff" protocol=ttrpc version=3 May 15 15:15:42.469768 systemd[1]: Started cri-containerd-2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597.scope - libcontainer container 2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597. May 15 15:15:42.553750 containerd[1566]: time="2025-05-15T15:15:42.553710464Z" level=info msg="StartContainer for \"2a8ebe985edadd46f2e2f56df6c0664bb1bc47f2330141547acbb4adb6d8b597\" returns successfully" May 15 15:15:42.664555 kubelet[2768]: I0515 15:15:42.664454 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ssx6b" podStartSLOduration=116.021756302 podStartE2EDuration="2m2.664423499s" podCreationTimestamp="2025-05-15 15:13:40 +0000 UTC" firstStartedPulling="2025-05-15 15:15:35.724495519 +0000 UTC m=+134.793669309" lastFinishedPulling="2025-05-15 15:15:42.367162715 +0000 UTC m=+141.436336506" observedRunningTime="2025-05-15 15:15:42.661549555 +0000 UTC m=+141.730723351" watchObservedRunningTime="2025-05-15 15:15:42.664423499 +0000 UTC m=+141.733597295" May 15 15:15:42.789590 systemd-networkd[1449]: cali92db4c1a7e1: Gained IPv6LL May 15 15:15:42.900199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60883437.mount: Deactivated successfully. 
May 15 15:15:43.080264 containerd[1566]: time="2025-05-15T15:15:43.079960520Z" level=error msg="failed to cleanup \"extract-912273185-GFla sha256:f96114e9454bb8b5edf548870b385293d170efffaaf27ec6bca0df5396b830ef\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:15:43.081586 containerd[1566]: time="2025-05-15T15:15:43.081533360Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/usr/share/zoneinfo/posix/Africa/Bissau: no space left on device" May 15 15:15:43.081799 containerd[1566]: time="2025-05-15T15:15:43.081573338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=3057258" May 15 15:15:43.082083 kubelet[2768]: E0515 15:15:43.082020 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/usr/share/zoneinfo/posix/Africa/Bissau: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:43.082213 kubelet[2768]: E0515 15:15:43.082082 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/usr/share/zoneinfo/posix/Africa/Bissau: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:43.082444 kubelet[2768]: E0515 15:15:43.082409 2768 kuberuntime_manager.go:1256] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-478kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb): ErrImagePull: failed to pull and unpack image "registry.k8s.io/coredns/coredns:v1.11.1": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/usr/share/zoneinfo/posix/Africa/Bissau: no space left on device May 15 15:15:43.082603 kubelet[2768]: E0515 15:15:43.082542 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.1\\\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/79/fs/usr/share/zoneinfo/posix/Africa/Bissau: no space left on device\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:43.284739 kubelet[2768]: I0515 15:15:43.284695 2768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 15:15:43.284739 kubelet[2768]: I0515 15:15:43.284735 2768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 15:15:43.641107 kubelet[2768]: E0515 15:15:43.640962 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:43.642354 kubelet[2768]: E0515 15:15:43.641947 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.1\\\"\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:44.115900 kubelet[2768]: E0515 15:15:44.115028 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:44.116958 containerd[1566]: time="2025-05-15T15:15:44.116904936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,}" May 15 15:15:44.286437 systemd-networkd[1449]: calif56348241d8: Link UP May 15 15:15:44.288672 systemd-networkd[1449]: calif56348241d8: Gained carrier May 15 15:15:44.316021 
containerd[1566]: 2025-05-15 15:15:44.179 [INFO][5603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0 coredns-7db6d8ff4d- kube-system 9cbb0523-a6f6-461c-a2a5-fad5b947b233 745 0 2025-05-15 15:13:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-3982d56781 coredns-7db6d8ff4d-nzhxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif56348241d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.179 [INFO][5603] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.219 [INFO][5614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" HandleID="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.234 [INFO][5614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" HandleID="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-3982d56781", "pod":"coredns-7db6d8ff4d-nzhxw", "timestamp":"2025-05-15 15:15:44.219802222 +0000 UTC"}, Hostname:"ci-4334.0.0-a-3982d56781", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.234 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.234 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.234 [INFO][5614] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-3982d56781' May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.237 [INFO][5614] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.243 [INFO][5614] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.251 [INFO][5614] ipam/ipam.go 489: Trying affinity for 192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.254 [INFO][5614] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.258 [INFO][5614] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.258 [INFO][5614] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.261 [INFO][5614] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926 May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.268 [INFO][5614] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.276 [INFO][5614] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.132/26] block=192.168.40.128/26 handle="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.276 [INFO][5614] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.132/26] handle="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" host="ci-4334.0.0-a-3982d56781" May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.276 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 15:15:44.316021 containerd[1566]: 2025-05-15 15:15:44.276 [INFO][5614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.132/26] IPv6=[] ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" HandleID="k8s-pod-network.3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Workload="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.281 [INFO][5603] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cbb0523-a6f6-461c-a2a5-fad5b947b233", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"", Pod:"coredns-7db6d8ff4d-nzhxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif56348241d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.281 [INFO][5603] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.132/32] ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.281 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif56348241d8 ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.288 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" 
WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.290 [INFO][5603] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9cbb0523-a6f6-461c-a2a5-fad5b947b233", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-3982d56781", ContainerID:"3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926", Pod:"coredns-7db6d8ff4d-nzhxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif56348241d8", MAC:"f2:cd:bc:56:1d:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:15:44.318645 containerd[1566]: 2025-05-15 15:15:44.309 [INFO][5603] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhxw" WorkloadEndpoint="ci--4334.0.0--a--3982d56781-k8s-coredns--7db6d8ff4d--nzhxw-eth0" May 15 15:15:44.369497 containerd[1566]: time="2025-05-15T15:15:44.369084883Z" level=info msg="connecting to shim 3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926" address="unix:///run/containerd/s/46cce27c32c68d334c506e554c96ef528d8eaf9e05f2572264d940f7b5e8f626" namespace=k8s.io protocol=ttrpc version=3 May 15 15:15:44.418836 systemd[1]: Started cri-containerd-3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926.scope - libcontainer container 3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926. 
May 15 15:15:44.522203 containerd[1566]: time="2025-05-15T15:15:44.522142216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhxw,Uid:9cbb0523-a6f6-461c-a2a5-fad5b947b233,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926\"" May 15 15:15:44.523643 kubelet[2768]: E0515 15:15:44.523614 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:44.526000 containerd[1566]: time="2025-05-15T15:15:44.525626070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:15:45.031916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769201879.mount: Deactivated successfully. May 15 15:15:45.154055 containerd[1566]: time="2025-05-15T15:15:45.154006569Z" level=error msg="failed to cleanup \"extract-46384596-2lFf sha256:f96114e9454bb8b5edf548870b385293d170efffaaf27ec6bca0df5396b830ef\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:15:45.155694 containerd[1566]: time="2025-05-15T15:15:45.155549323Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/posix/Australia/Hobart: no space left on device" May 15 15:15:45.155932 kubelet[2768]: E0515 15:15:45.155837 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/posix/Australia/Hobart: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:45.155932 kubelet[2768]: E0515 15:15:45.155887 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/posix/Australia/Hobart: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:45.156618 containerd[1566]: time="2025-05-15T15:15:45.155894763Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=2008682" May 15 15:15:45.156676 kubelet[2768]: E0515 15:15:45.156413 2768 kuberuntime_manager.go:1256] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m 
DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84svn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7db6d8ff4d-nzhxw_kube-system(9cbb0523-a6f6-461c-a2a5-fad5b947b233): ErrImagePull: failed to pull and unpack image "registry.k8s.io/coredns/coredns:v1.11.1": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/posix/Australia/Hobart: no space left on device May 15 15:15:45.156676 kubelet[2768]: E0515 15:15:45.156450 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.1\\\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/posix/Australia/Hobart: no space left on device\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:15:45.655534 kubelet[2768]: E0515 15:15:45.655398 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:45.656548 kubelet[2768]: E0515 15:15:45.656481 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.1\\\"\"" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podUID="9cbb0523-a6f6-461c-a2a5-fad5b947b233" May 15 15:15:46.181459 systemd-networkd[1449]: calif56348241d8: Gained IPv6LL May 15 15:15:47.230498 systemd[1]: Started sshd@30-165.232.158.142:22-139.178.68.195:49018.service - OpenSSH per-connection server daemon (139.178.68.195:49018). 
May 15 15:15:47.345207 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 49018 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:47.350264 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:47.356332 kubelet[2768]: I0515 15:15:47.355726 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:47.357639 kubelet[2768]: I0515 15:15:47.356983 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:47.365201 kubelet[2768]: I0515 15:15:47.364137 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:47.374266 systemd-logind[1490]: New session 28 of user core. May 15 15:15:47.377694 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 15:15:47.398038 kubelet[2768]: I0515 15:15:47.397983 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:47.398303 kubelet[2768]: I0515 15:15:47.398264 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-typha-64b5f48db9-jvlhw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398342 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398360 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398385 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398405 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398419 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398433 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:47.398469 kubelet[2768]: E0515 15:15:47.398460 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:47.398838 kubelet[2768]: E0515 15:15:47.398475 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:47.398838 kubelet[2768]: E0515 15:15:47.398494 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:47.398838 kubelet[2768]: E0515 15:15:47.398506 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:47.398838 kubelet[2768]: I0515 
15:15:47.398523 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:47.576004 sshd[5706]: Connection closed by 139.178.68.195 port 49018 May 15 15:15:47.576223 sshd-session[5697]: pam_unix(sshd:session): session closed for user core May 15 15:15:47.584518 systemd[1]: sshd@30-165.232.158.142:22-139.178.68.195:49018.service: Deactivated successfully. May 15 15:15:47.591240 systemd[1]: session-28.scope: Deactivated successfully. May 15 15:15:47.593470 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit. May 15 15:15:47.596285 systemd-logind[1490]: Removed session 28. May 15 15:15:48.115151 kubelet[2768]: E0515 15:15:48.115059 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:52.116913 containerd[1566]: time="2025-05-15T15:15:52.116563398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:15:52.593726 systemd[1]: Started sshd@31-165.232.158.142:22-139.178.68.195:49028.service - OpenSSH per-connection server daemon (139.178.68.195:49028). May 15 15:15:52.654266 sshd[5720]: Accepted publickey for core from 139.178.68.195 port 49028 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:52.656274 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:52.662049 systemd-logind[1490]: New session 29 of user core. May 15 15:15:52.666480 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 15:15:52.840593 sshd[5722]: Connection closed by 139.178.68.195 port 49028 May 15 15:15:52.841305 sshd-session[5720]: pam_unix(sshd:session): session closed for user core May 15 15:15:52.845557 systemd[1]: sshd@31-165.232.158.142:22-139.178.68.195:49028.service: Deactivated successfully. May 15 15:15:52.848044 systemd[1]: session-29.scope: Deactivated successfully. May 15 15:15:52.849491 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit. May 15 15:15:52.852862 systemd-logind[1490]: Removed session 29. 
May 15 15:15:53.337823 containerd[1566]: time="2025-05-15T15:15:53.337714610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:15:53.337823 containerd[1566]: time="2025-05-15T15:15:53.337748945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=5247158" May 15 15:15:53.338732 kubelet[2768]: E0515 15:15:53.338005 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:15:53.338732 kubelet[2768]: E0515 15:15:53.338060 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:15:53.338732 kubelet[2768]: E0515 15:15:53.338210 2768 kuberuntime_manager.go:1256] container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6flx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:15:53.338732 kubelet[2768]: E0515 15:15:53.338240 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:15:56.115051 kubelet[2768]: E0515 15:15:56.114471 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:56.117628 containerd[1566]: time="2025-05-15T15:15:56.117509439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:15:56.591900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879096055.mount: Deactivated successfully. 
May 15 15:15:56.714205 containerd[1566]: time="2025-05-15T15:15:56.713848792Z" level=error msg="failed to cleanup \"extract-603648911-HY3J sha256:f96114e9454bb8b5edf548870b385293d170efffaaf27ec6bca0df5396b830ef\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:15:56.714534 containerd[1566]: time="2025-05-15T15:15:56.714469480Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/87/fs/usr/share/zoneinfo/posix/Asia/Seoul: no space left on device" May 15 15:15:56.714592 containerd[1566]: time="2025-05-15T15:15:56.714567343Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=2008682" May 15 15:15:56.714875 kubelet[2768]: E0515 15:15:56.714828 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/87/fs/usr/share/zoneinfo/posix/Asia/Seoul: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:56.715380 kubelet[2768]: E0515 15:15:56.714898 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.1\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/87/fs/usr/share/zoneinfo/posix/Asia/Seoul: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.1" May 15 15:15:56.715380 kubelet[2768]: E0515 15:15:56.715091 2768 kuberuntime_manager.go:1256] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-478kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7db6d8ff4d-zchv5_kube-system(1ad4b350-5146-45de-9d05-ced32cc472bb): ErrImagePull: failed to pull and unpack image "registry.k8s.io/coredns/coredns:v1.11.1": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/87/fs/usr/share/zoneinfo/posix/Asia/Seoul: no space left on device May 15 15:15:56.715380 kubelet[2768]: E0515 15:15:56.715149 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.1\\\": failed to extract layer sha256:7bea6b893187b14fc0a759fe5f8972d1292a9c0554c87cbf485f0947c26b8a05: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/87/fs/usr/share/zoneinfo/posix/Asia/Seoul: no space left on device\"" pod="kube-system/coredns-7db6d8ff4d-zchv5" podUID="1ad4b350-5146-45de-9d05-ced32cc472bb" May 15 15:15:57.427621 kubelet[2768]: I0515 15:15:57.427574 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:15:57.427621 kubelet[2768]: I0515 15:15:57.427620 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:15:57.430832 kubelet[2768]: I0515 15:15:57.430766 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:15:57.432395 kubelet[2768]: I0515 15:15:57.432363 2768 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578" size=21998657 runtimeHandler="" May 15 15:15:57.432668 containerd[1566]: time="2025-05-15T15:15:57.432532836Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:15:57.434301 containerd[1566]: time="2025-05-15T15:15:57.433866332Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.36.7\"" May 15 15:15:57.434301 containerd[1566]: time="2025-05-15T15:15:57.433962741Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\"" May 15 15:15:57.434391 containerd[1566]: time="2025-05-15T15:15:57.434355246Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" returns successfully" May 15 15:15:57.435090 containerd[1566]: time="2025-05-15T15:15:57.435019342Z" level=info msg="ImageDelete event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:15:57.448574 kubelet[2768]: I0515 15:15:57.448218 2768 eviction_manager.go:377] "Eviction manager: must 
evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:15:57.448574 kubelet[2768]: I0515 15:15:57.448413 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-typha-64b5f48db9-jvlhw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448455 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448468 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448476 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448487 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448496 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448505 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448519 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448527 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448535 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:15:57.448574 kubelet[2768]: E0515 15:15:57.448544 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:15:57.448574 kubelet[2768]: I0515 15:15:57.448553 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:15:57.868352 systemd[1]: Started sshd@32-165.232.158.142:22-139.178.68.195:42836.service - OpenSSH per-connection server daemon (139.178.68.195:42836). May 15 15:15:57.939388 sshd[5754]: Accepted publickey for core from 139.178.68.195 port 42836 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:15:57.941245 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:15:57.947224 systemd-logind[1490]: New session 30 of user core. May 15 15:15:57.957493 systemd[1]: Started session-30.scope - Session 30 of User core. 
May 15 15:15:58.115917 kubelet[2768]: E0515 15:15:58.114408 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:15:58.121020 containerd[1566]: time="2025-05-15T15:15:58.120778344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:15:58.149281 sshd[5756]: Connection closed by 139.178.68.195 port 42836 May 15 15:15:58.149905 sshd-session[5754]: pam_unix(sshd:session): session closed for user core May 15 15:15:58.158842 systemd[1]: sshd@32-165.232.158.142:22-139.178.68.195:42836.service: Deactivated successfully. May 15 15:15:58.163942 systemd[1]: session-30.scope: Deactivated successfully. May 15 15:15:58.167072 systemd-logind[1490]: Session 30 logged out. Waiting for processes to exit. May 15 15:15:58.170071 systemd-logind[1490]: Removed session 30. May 15 15:15:58.547305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463951306.mount: Deactivated successfully. May 15 15:15:59.692240 containerd[1566]: time="2025-05-15T15:15:59.692186522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:59.694024 containerd[1566]: time="2025-05-15T15:15:59.693969697Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 15:15:59.695193 containerd[1566]: time="2025-05-15T15:15:59.694975963Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:59.699242 containerd[1566]: time="2025-05-15T15:15:59.699207721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:15:59.701429 containerd[1566]: time="2025-05-15T15:15:59.701332931Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.578883644s" May 15 15:15:59.701429 containerd[1566]: time="2025-05-15T15:15:59.701366819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:15:59.703928 containerd[1566]: time="2025-05-15T15:15:59.703890512Z" level=info msg="CreateContainer within sandbox \"3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 15:15:59.722166 containerd[1566]: time="2025-05-15T15:15:59.722114638Z" level=info msg="Container 3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f: CDI devices from CRI Config.CDIDevices: []" May 15 15:15:59.727591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010581904.mount: Deactivated successfully. 
May 15 15:15:59.738828 containerd[1566]: time="2025-05-15T15:15:59.738780343Z" level=info msg="CreateContainer within sandbox \"3ef44ea34ca60311c70c414a8fbd80ba8284c60650e6e2298320b8b63cee1926\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f\"" May 15 15:15:59.739487 containerd[1566]: time="2025-05-15T15:15:59.739463425Z" level=info msg="StartContainer for \"3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f\"" May 15 15:15:59.740672 containerd[1566]: time="2025-05-15T15:15:59.740608308Z" level=info msg="connecting to shim 3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f" address="unix:///run/containerd/s/46cce27c32c68d334c506e554c96ef528d8eaf9e05f2572264d940f7b5e8f626" protocol=ttrpc version=3 May 15 15:15:59.768867 systemd[1]: Started cri-containerd-3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f.scope - libcontainer container 3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f. May 15 15:15:59.819806 containerd[1566]: time="2025-05-15T15:15:59.819758796Z" level=info msg="StartContainer for \"3ae7a2d803e0df28abc99a85ae7efb95a26de07a04439476e85638c124b4c82f\" returns successfully" May 15 15:16:00.116099 kubelet[2768]: E0515 15:16:00.115401 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:00.116099 kubelet[2768]: E0515 15:16:00.115951 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:00.707847 kubelet[2768]: E0515 15:16:00.707533 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:00.740367 kubelet[2768]: I0515 15:16:00.740221 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nzhxw" podStartSLOduration=131.56313173 podStartE2EDuration="2m26.740162002s" podCreationTimestamp="2025-05-15 15:13:34 +0000 UTC" firstStartedPulling="2025-05-15 15:15:44.525313184 +0000 UTC m=+143.594486971" lastFinishedPulling="2025-05-15 15:15:59.702343456 +0000 UTC m=+158.771517243" observedRunningTime="2025-05-15 15:16:00.735031762 +0000 UTC m=+159.804205559" watchObservedRunningTime="2025-05-15 15:16:00.740162002 +0000 UTC m=+159.809335801" May 15 15:16:01.710593 kubelet[2768]: E0515 15:16:01.710486 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:02.714668 kubelet[2768]: E0515 15:16:02.714618 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:03.167543 systemd[1]: Started sshd@33-165.232.158.142:22-139.178.68.195:42846.service - OpenSSH per-connection server daemon (139.178.68.195:42846). 
May 15 15:16:03.269530 sshd[5851]: Accepted publickey for core from 139.178.68.195 port 42846 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:03.271109 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:03.279668 systemd-logind[1490]: New session 31 of user core. May 15 15:16:03.284436 systemd[1]: Started session-31.scope - Session 31 of User core. May 15 15:16:03.489448 sshd[5853]: Connection closed by 139.178.68.195 port 42846 May 15 15:16:03.490491 sshd-session[5851]: pam_unix(sshd:session): session closed for user core May 15 15:16:03.496484 systemd[1]: sshd@33-165.232.158.142:22-139.178.68.195:42846.service: Deactivated successfully. May 15 15:16:03.500393 systemd[1]: session-31.scope: Deactivated successfully. May 15 15:16:03.503688 systemd-logind[1490]: Session 31 logged out. Waiting for processes to exit. May 15 15:16:03.506036 systemd-logind[1490]: Removed session 31. May 15 15:16:05.371139 containerd[1566]: time="2025-05-15T15:16:05.370967149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"272948eb0a672c07da804baa2a098e60c1ea1dff18b8de9236301ffbd09b3469\" pid:5880 exited_at:{seconds:1747322165 nanos:370301645}" May 15 15:16:05.375986 kubelet[2768]: E0515 15:16:05.375739 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:07.117342 kubelet[2768]: E0515 15:16:07.116231 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:16:07.478266 kubelet[2768]: I0515 15:16:07.478131 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:07.478266 kubelet[2768]: I0515 15:16:07.478201 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:16:07.481957 kubelet[2768]: I0515 15:16:07.481894 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:16:07.504207 kubelet[2768]: I0515 15:16:07.504120 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:07.504385 kubelet[2768]: I0515 15:16:07.504364 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-zchv5","calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:07.504456 kubelet[2768]: E0515 15:16:07.504407 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:07.504456 kubelet[2768]: E0515 15:16:07.504420 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:07.504456 
kubelet[2768]: E0515 15:16:07.504432 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:07.504456 kubelet[2768]: E0515 15:16:07.504441 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:07.504456 kubelet[2768]: E0515 15:16:07.504451 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:07.504587 kubelet[2768]: E0515 15:16:07.504462 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:07.504587 kubelet[2768]: E0515 15:16:07.504475 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:07.504587 kubelet[2768]: E0515 15:16:07.504486 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:07.504587 kubelet[2768]: E0515 15:16:07.504498 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:07.504587 kubelet[2768]: E0515 15:16:07.504506 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:07.504587 kubelet[2768]: I0515 15:16:07.504516 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:08.507461 systemd[1]: Started sshd@34-165.232.158.142:22-139.178.68.195:35994.service - OpenSSH per-connection server daemon (139.178.68.195:35994). May 15 15:16:08.579147 sshd[5894]: Accepted publickey for core from 139.178.68.195 port 35994 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:08.581359 sshd-session[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:08.588530 systemd-logind[1490]: New session 32 of user core. May 15 15:16:08.594456 systemd[1]: Started session-32.scope - Session 32 of User core. May 15 15:16:08.748977 sshd[5896]: Connection closed by 139.178.68.195 port 35994 May 15 15:16:08.748809 sshd-session[5894]: pam_unix(sshd:session): session closed for user core May 15 15:16:08.758075 systemd[1]: sshd@34-165.232.158.142:22-139.178.68.195:35994.service: Deactivated successfully. May 15 15:16:08.762005 systemd[1]: session-32.scope: Deactivated successfully. May 15 15:16:08.764349 systemd-logind[1490]: Session 32 logged out. Waiting for processes to exit. May 15 15:16:08.767362 systemd-logind[1490]: Removed session 32. 
May 15 15:16:10.115435 kubelet[2768]: E0515 15:16:10.114981 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:10.119600 containerd[1566]: time="2025-05-15T15:16:10.119448943Z" level=info msg="CreateContainer within sandbox \"c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 15:16:10.128053 containerd[1566]: time="2025-05-15T15:16:10.128011405Z" level=info msg="Container c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a: CDI devices from CRI Config.CDIDevices: []" May 15 15:16:10.141242 containerd[1566]: time="2025-05-15T15:16:10.140197468Z" level=info msg="CreateContainer within sandbox \"c75e735428243e4f284440d006388ec0ad8306f5ec3006cc7eb7380ad5776323\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a\"" May 15 15:16:10.142276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352626034.mount: Deactivated successfully. May 15 15:16:10.146412 containerd[1566]: time="2025-05-15T15:16:10.146095289Z" level=info msg="StartContainer for \"c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a\"" May 15 15:16:10.148670 containerd[1566]: time="2025-05-15T15:16:10.148603772Z" level=info msg="connecting to shim c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a" address="unix:///run/containerd/s/b889ac0797770d136d6a68cfedff489ff2e3389f1e26f66868a6dc1a23a5bb43" protocol=ttrpc version=3 May 15 15:16:10.179789 systemd[1]: Started cri-containerd-c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a.scope - libcontainer container c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a. 
May 15 15:16:10.244427 containerd[1566]: time="2025-05-15T15:16:10.244386573Z" level=info msg="StartContainer for \"c56915bf0aba67dc641d54d65554cfa695d828eef95bdd402b6f2ab2d157f19a\" returns successfully" May 15 15:16:10.738095 kubelet[2768]: E0515 15:16:10.738041 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:11.489076 kubelet[2768]: I0515 15:16:11.489011 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zchv5" podStartSLOduration=-9223371879.36579 podStartE2EDuration="2m37.488987117s" podCreationTimestamp="2025-05-15 15:13:34 +0000 UTC" firstStartedPulling="2025-05-15 15:15:41.868601169 +0000 UTC m=+140.937774956" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:16:10.75249428 +0000 UTC m=+169.821668076" watchObservedRunningTime="2025-05-15 15:16:11.488987117 +0000 UTC m=+170.558160912" May 15 15:16:11.740798 kubelet[2768]: E0515 15:16:11.740634 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:12.743033 kubelet[2768]: E0515 15:16:12.743001 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:13.767528 systemd[1]: Started sshd@35-165.232.158.142:22-139.178.68.195:44772.service - OpenSSH per-connection server daemon (139.178.68.195:44772). May 15 15:16:13.838443 sshd[5943]: Accepted publickey for core from 139.178.68.195 port 44772 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:13.840103 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:13.846862 systemd-logind[1490]: New session 33 of user core. May 15 15:16:13.852496 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 15:16:14.028490 sshd[5945]: Connection closed by 139.178.68.195 port 44772 May 15 15:16:14.031746 sshd-session[5943]: pam_unix(sshd:session): session closed for user core May 15 15:16:14.036926 systemd[1]: sshd@35-165.232.158.142:22-139.178.68.195:44772.service: Deactivated successfully. May 15 15:16:14.039744 systemd[1]: session-33.scope: Deactivated successfully. May 15 15:16:14.040922 systemd-logind[1490]: Session 33 logged out. Waiting for processes to exit. May 15 15:16:14.042504 systemd-logind[1490]: Removed session 33. 
May 15 15:16:17.532802 kubelet[2768]: I0515 15:16:17.532761 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:17.532802 kubelet[2768]: I0515 15:16:17.532819 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:16:17.535369 kubelet[2768]: I0515 15:16:17.535274 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:16:17.556060 kubelet[2768]: I0515 15:16:17.555190 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:17.556060 kubelet[2768]: I0515 15:16:17.555362 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555397 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555415 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555426 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555436 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555445 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555454 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555466 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555475 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555485 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:17.556060 kubelet[2768]: E0515 15:16:17.555494 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:17.556060 kubelet[2768]: I0515 15:16:17.555504 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:19.043628 systemd[1]: Started sshd@36-165.232.158.142:22-139.178.68.195:44776.service - OpenSSH per-connection server daemon (139.178.68.195:44776). 
May 15 15:16:19.116936 kubelet[2768]: E0515 15:16:19.116004 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:19.121689 sshd[5966]: Accepted publickey for core from 139.178.68.195 port 44776 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:19.123038 sshd-session[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:19.124414 containerd[1566]: time="2025-05-15T15:16:19.123365649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:16:19.135278 systemd-logind[1490]: New session 34 of user core. May 15 15:16:19.141680 systemd[1]: Started session-34.scope - Session 34 of User core. May 15 15:16:19.316357 sshd[5968]: Connection closed by 139.178.68.195 port 44776 May 15 15:16:19.315589 sshd-session[5966]: pam_unix(sshd:session): session closed for user core May 15 15:16:19.322002 systemd[1]: sshd@36-165.232.158.142:22-139.178.68.195:44776.service: Deactivated successfully. May 15 15:16:19.324907 systemd[1]: session-34.scope: Deactivated successfully. May 15 15:16:19.326792 systemd-logind[1490]: Session 34 logged out. Waiting for processes to exit. May 15 15:16:19.329149 systemd-logind[1490]: Removed session 34. May 15 15:16:20.431519 containerd[1566]: time="2025-05-15T15:16:20.431419506Z" level=error msg="failed to cleanup \"extract-167695321-OKlp sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:16:20.432234 containerd[1566]: time="2025-05-15T15:16:20.432115260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:16:20.432335 containerd[1566]: time="2025-05-15T15:16:20.432241565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=11538614" May 15 15:16:20.432599 kubelet[2768]: E0515 15:16:20.432559 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:16:20.432904 kubelet[2768]: E0515 15:16:20.432633 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:16:20.432947 kubelet[2768]: E0515 15:16:20.432933 2768 kuberuntime_manager.go:1256] container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6flx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:16:20.433042 kubelet[2768]: E0515 15:16:20.432970 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:16:24.336342 systemd[1]: Started 
sshd@37-165.232.158.142:22-139.178.68.195:58236.service - OpenSSH per-connection server daemon (139.178.68.195:58236). May 15 15:16:24.405373 sshd[5981]: Accepted publickey for core from 139.178.68.195 port 58236 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:24.407161 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:24.415782 systemd-logind[1490]: New session 35 of user core. May 15 15:16:24.423403 systemd[1]: Started session-35.scope - Session 35 of User core. May 15 15:16:24.578416 sshd[5985]: Connection closed by 139.178.68.195 port 58236 May 15 15:16:24.579087 sshd-session[5981]: pam_unix(sshd:session): session closed for user core May 15 15:16:24.585279 systemd[1]: sshd@37-165.232.158.142:22-139.178.68.195:58236.service: Deactivated successfully. May 15 15:16:24.587935 systemd[1]: session-35.scope: Deactivated successfully. May 15 15:16:24.589765 systemd-logind[1490]: Session 35 logged out. Waiting for processes to exit. May 15 15:16:24.591964 systemd-logind[1490]: Removed session 35. May 15 15:16:27.588326 kubelet[2768]: I0515 15:16:27.588260 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:27.588326 kubelet[2768]: I0515 15:16:27.588309 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:16:27.591592 kubelet[2768]: I0515 15:16:27.591469 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:16:27.619618 kubelet[2768]: I0515 15:16:27.619584 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:27.619895 kubelet[2768]: I0515 15:16:27.619866 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619920 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619938 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619948 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619959 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619969 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619977 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.619988 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.620004 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.620013 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:27.620034 kubelet[2768]: E0515 15:16:27.620021 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:27.620034 kubelet[2768]: I0515 15:16:27.620031 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:29.592854 systemd[1]: Started sshd@38-165.232.158.142:22-139.178.68.195:58252.service - OpenSSH per-connection server daemon (139.178.68.195:58252). May 15 15:16:29.662290 sshd[5997]: Accepted publickey for core from 139.178.68.195 port 58252 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:29.664400 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:29.670675 systemd-logind[1490]: New session 36 of user core. May 15 15:16:29.678574 systemd[1]: Started session-36.scope - Session 36 of User core. May 15 15:16:29.903587 sshd[5999]: Connection closed by 139.178.68.195 port 58252 May 15 15:16:29.904814 sshd-session[5997]: pam_unix(sshd:session): session closed for user core May 15 15:16:29.912385 systemd[1]: sshd@38-165.232.158.142:22-139.178.68.195:58252.service: Deactivated successfully. May 15 15:16:29.916867 systemd[1]: session-36.scope: Deactivated successfully. May 15 15:16:29.919852 systemd-logind[1490]: Session 36 logged out. Waiting for processes to exit. May 15 15:16:29.923873 systemd-logind[1490]: Removed session 36. May 15 15:16:33.120797 kubelet[2768]: E0515 15:16:33.119839 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:16:34.921582 systemd[1]: Started sshd@39-165.232.158.142:22-139.178.68.195:38160.service - OpenSSH per-connection server daemon (139.178.68.195:38160). May 15 15:16:34.992126 sshd[6011]: Accepted publickey for core from 139.178.68.195 port 38160 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:34.993586 sshd-session[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:34.999029 systemd-logind[1490]: New session 37 of user core. May 15 15:16:35.006634 systemd[1]: Started session-37.scope - Session 37 of User core. May 15 15:16:35.156663 sshd[6013]: Connection closed by 139.178.68.195 port 38160 May 15 15:16:35.157993 sshd-session[6011]: pam_unix(sshd:session): session closed for user core May 15 15:16:35.163531 systemd[1]: sshd@39-165.232.158.142:22-139.178.68.195:38160.service: Deactivated successfully. May 15 15:16:35.164337 systemd-logind[1490]: Session 37 logged out. Waiting for processes to exit. May 15 15:16:35.167580 systemd[1]: session-37.scope: Deactivated successfully. May 15 15:16:35.176801 systemd-logind[1490]: Removed session 37. 
May 15 15:16:35.341737 containerd[1566]: time="2025-05-15T15:16:35.341691588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"3c5e18bce2fc027eff64cadfcd39b4bdf8e137395b259d07128c9d39492bc855\" pid:6038 exited_at:{seconds:1747322195 nanos:341244414}" May 15 15:16:36.469424 systemd[1]: Started sshd@40-165.232.158.142:22-64.225.79.163:33620.service - OpenSSH per-connection server daemon (64.225.79.163:33620). May 15 15:16:37.191976 sshd[6051]: Invalid user site from 64.225.79.163 port 33620 May 15 15:16:37.355834 sshd[6051]: Connection closed by invalid user site 64.225.79.163 port 33620 [preauth] May 15 15:16:37.359397 systemd[1]: sshd@40-165.232.158.142:22-64.225.79.163:33620.service: Deactivated successfully. May 15 15:16:37.646031 kubelet[2768]: I0515 15:16:37.645985 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:37.646031 kubelet[2768]: I0515 15:16:37.646032 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:16:37.648578 kubelet[2768]: I0515 15:16:37.648552 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:16:37.672695 kubelet[2768]: I0515 15:16:37.672664 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:37.672969 kubelet[2768]: I0515 15:16:37.672931 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:37.673053 kubelet[2768]: E0515 15:16:37.673002 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:37.673053 kubelet[2768]: E0515 15:16:37.673024 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:37.673053 kubelet[2768]: E0515 15:16:37.673039 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:37.673053 kubelet[2768]: E0515 15:16:37.673052 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673065 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673077 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673093 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673104 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673116 2768 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:37.673164 kubelet[2768]: E0515 15:16:37.673130 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:37.673164 kubelet[2768]: I0515 15:16:37.673143 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:40.174020 systemd[1]: Started sshd@41-165.232.158.142:22-139.178.68.195:38174.service - OpenSSH per-connection server daemon (139.178.68.195:38174). May 15 15:16:40.238229 sshd[6056]: Accepted publickey for core from 139.178.68.195 port 38174 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:40.240166 sshd-session[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:40.247266 systemd-logind[1490]: New session 38 of user core. May 15 15:16:40.252603 systemd[1]: Started session-38.scope - Session 38 of User core. May 15 15:16:40.425352 sshd[6059]: Connection closed by 139.178.68.195 port 38174 May 15 15:16:40.426522 sshd-session[6056]: pam_unix(sshd:session): session closed for user core May 15 15:16:40.430693 systemd[1]: sshd@41-165.232.158.142:22-139.178.68.195:38174.service: Deactivated successfully. May 15 15:16:40.433661 systemd[1]: session-38.scope: Deactivated successfully. May 15 15:16:40.436295 systemd-logind[1490]: Session 38 logged out. Waiting for processes to exit. May 15 15:16:40.438496 systemd-logind[1490]: Removed session 38. May 15 15:16:42.115995 kubelet[2768]: E0515 15:16:42.115945 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:16:44.215541 systemd[1]: Started sshd@42-165.232.158.142:22-218.92.0.166:13102.service - OpenSSH per-connection server daemon (218.92.0.166:13102). May 15 15:16:45.442094 systemd[1]: Started sshd@43-165.232.158.142:22-139.178.68.195:33808.service - OpenSSH per-connection server daemon (139.178.68.195:33808). May 15 15:16:45.515133 sshd[6076]: Accepted publickey for core from 139.178.68.195 port 33808 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:45.518909 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:45.527850 systemd-logind[1490]: New session 39 of user core. May 15 15:16:45.532420 systemd[1]: Started session-39.scope - Session 39 of User core. May 15 15:16:45.692045 sshd[6079]: Connection closed by 139.178.68.195 port 33808 May 15 15:16:45.692652 sshd-session[6076]: pam_unix(sshd:session): session closed for user core May 15 15:16:45.699041 systemd-logind[1490]: Session 39 logged out. Waiting for processes to exit. May 15 15:16:45.699322 systemd[1]: sshd@43-165.232.158.142:22-139.178.68.195:33808.service: Deactivated successfully. May 15 15:16:45.701405 systemd[1]: session-39.scope: Deactivated successfully. May 15 15:16:45.704144 systemd-logind[1490]: Removed session 39. 
May 15 15:16:45.831161 sshd-session[6074]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:16:47.118221 kubelet[2768]: E0515 15:16:47.118057 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:16:47.696577 kubelet[2768]: I0515 15:16:47.696507 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:47.696577 kubelet[2768]: I0515 15:16:47.696554 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:47.696789 kubelet[2768]: I0515 15:16:47.696718 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:47.696789 kubelet[2768]: E0515 15:16:47.696753 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:47.696789 kubelet[2768]: E0515 15:16:47.696768 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:47.696789 kubelet[2768]: E0515 15:16:47.696778 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:47.696789 kubelet[2768]: E0515 15:16:47.696788 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696800 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696808 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696820 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696829 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696837 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:47.696967 kubelet[2768]: E0515 15:16:47.696846 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:47.696967 kubelet[2768]: I0515 15:16:47.696856 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:48.049500 sshd[6072]: PAM: 
Permission denied for root from 218.92.0.166 May 15 15:16:48.767512 sshd-session[6090]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:16:49.488299 update_engine[1494]: I20250515 15:16:49.488166 1494 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 15:16:49.488299 update_engine[1494]: I20250515 15:16:49.488291 1494 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 15:16:49.489072 update_engine[1494]: I20250515 15:16:49.489037 1494 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 15:16:49.490403 update_engine[1494]: I20250515 15:16:49.490360 1494 omaha_request_params.cc:62] Current group set to developer May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490490 1494 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490499 1494 update_attempter.cc:643] Scheduling an action processor start. May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490520 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490564 1494 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490620 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490628 1494 omaha_request_action.cc:272] Request: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: May 15 15:16:49.490736 update_engine[1494]: I20250515 15:16:49.490634 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 15:16:49.521005 update_engine[1494]: I20250515 15:16:49.520944 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 15:16:49.521459 update_engine[1494]: I20250515 15:16:49.521399 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 15:16:49.521960 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 15:16:49.523769 update_engine[1494]: E20250515 15:16:49.522528 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 15:16:49.523769 update_engine[1494]: I20250515 15:16:49.522626 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 15:16:50.715605 systemd[1]: Started sshd@44-165.232.158.142:22-139.178.68.195:33820.service - OpenSSH per-connection server daemon (139.178.68.195:33820). May 15 15:16:50.722776 sshd[6072]: PAM: Permission denied for root from 218.92.0.166 May 15 15:16:50.807743 sshd[6092]: Accepted publickey for core from 139.178.68.195 port 33820 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:50.810206 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:50.817245 systemd-logind[1490]: New session 40 of user core. May 15 15:16:50.823472 systemd[1]: Started session-40.scope - Session 40 of User core. 
May 15 15:16:51.005108 sshd[6094]: Connection closed by 139.178.68.195 port 33820 May 15 15:16:51.005926 sshd-session[6092]: pam_unix(sshd:session): session closed for user core May 15 15:16:51.011881 systemd-logind[1490]: Session 40 logged out. Waiting for processes to exit. May 15 15:16:51.012849 systemd[1]: sshd@44-165.232.158.142:22-139.178.68.195:33820.service: Deactivated successfully. May 15 15:16:51.018466 systemd[1]: session-40.scope: Deactivated successfully. May 15 15:16:51.023930 systemd-logind[1490]: Removed session 40. May 15 15:16:51.054688 sshd-session[6096]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.166 user=root May 15 15:16:52.951350 sshd[6072]: PAM: Permission denied for root from 218.92.0.166 May 15 15:16:53.113365 sshd[6072]: Received disconnect from 218.92.0.166 port 13102:11: [preauth] May 15 15:16:53.113365 sshd[6072]: Disconnected from authenticating user root 218.92.0.166 port 13102 [preauth] May 15 15:16:53.118695 systemd[1]: sshd@42-165.232.158.142:22-218.92.0.166:13102.service: Deactivated successfully. May 15 15:16:56.019756 systemd[1]: Started sshd@45-165.232.158.142:22-139.178.68.195:34784.service - OpenSSH per-connection server daemon (139.178.68.195:34784). May 15 15:16:56.079968 sshd[6109]: Accepted publickey for core from 139.178.68.195 port 34784 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:16:56.082241 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:16:56.089134 systemd-logind[1490]: New session 41 of user core. May 15 15:16:56.103493 systemd[1]: Started session-41.scope - Session 41 of User core. May 15 15:16:56.284990 sshd[6111]: Connection closed by 139.178.68.195 port 34784 May 15 15:16:56.285771 sshd-session[6109]: pam_unix(sshd:session): session closed for user core May 15 15:16:56.292585 systemd[1]: sshd@45-165.232.158.142:22-139.178.68.195:34784.service: Deactivated successfully. May 15 15:16:56.295864 systemd[1]: session-41.scope: Deactivated successfully. May 15 15:16:56.297795 systemd-logind[1490]: Session 41 logged out. Waiting for processes to exit. May 15 15:16:56.301075 systemd-logind[1490]: Removed session 41. 
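[Editor's note] Interleaved with the routine publickey sessions for user core from 139.178.68.195 are repeated root password failures from 218.92.0.166 and an invalid-user probe from 64.225.79.163. A hedged sketch for tallying such failed attempts per source address is below; the patterns are taken verbatim from the sshd and pam_unix lines in this excerpt, and journal.txt is again a placeholder file name.

import re
from collections import Counter

# Phrases copied from the sshd / pam_unix lines in this log.
PATTERNS = [
    re.compile(r'authentication failure;.*?rhost=(\S+)'),
    re.compile(r'Invalid user \S+ from (\S+) port'),
    re.compile(r'PAM: Permission denied for root from (\S+)'),
]

def failed_auth_sources(path="journal.txt"):  # placeholder path
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for pat in PATTERNS:
                for host in pat.findall(line):
                    hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, n in failed_auth_sources().most_common():
        print(f"{host}: {n} failed attempt(s)")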
May 15 15:16:57.743395 kubelet[2768]: I0515 15:16:57.743322 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:16:57.743395 kubelet[2768]: I0515 15:16:57.743395 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:16:57.745701 kubelet[2768]: I0515 15:16:57.745682 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:16:57.769691 kubelet[2768]: I0515 15:16:57.769657 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:16:57.769881 kubelet[2768]: I0515 15:16:57.769862 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769900 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769924 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769935 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769945 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769953 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:16:57.769964 kubelet[2768]: E0515 15:16:57.769962 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:16:57.770128 kubelet[2768]: E0515 15:16:57.769974 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:16:57.770128 kubelet[2768]: E0515 15:16:57.769994 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:16:57.770128 kubelet[2768]: E0515 15:16:57.770003 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:16:57.770128 kubelet[2768]: E0515 15:16:57.770011 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:16:57.770128 kubelet[2768]: I0515 15:16:57.770022 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:16:58.116045 kubelet[2768]: E0515 15:16:58.115821 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" 
pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:16:59.429242 update_engine[1494]: I20250515 15:16:59.428580 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 15:16:59.429242 update_engine[1494]: I20250515 15:16:59.428853 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 15:16:59.431732 update_engine[1494]: I20250515 15:16:59.431658 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 15:16:59.431886 update_engine[1494]: E20250515 15:16:59.431836 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 15:16:59.431960 update_engine[1494]: I20250515 15:16:59.431900 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 15:17:01.308895 systemd[1]: Started sshd@46-165.232.158.142:22-139.178.68.195:34790.service - OpenSSH per-connection server daemon (139.178.68.195:34790). May 15 15:17:01.381451 sshd[6129]: Accepted publickey for core from 139.178.68.195 port 34790 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:01.383330 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:01.390591 systemd-logind[1490]: New session 42 of user core. May 15 15:17:01.398630 systemd[1]: Started session-42.scope - Session 42 of User core. May 15 15:17:01.622603 sshd[6131]: Connection closed by 139.178.68.195 port 34790 May 15 15:17:01.624128 sshd-session[6129]: pam_unix(sshd:session): session closed for user core May 15 15:17:01.633162 systemd[1]: sshd@46-165.232.158.142:22-139.178.68.195:34790.service: Deactivated successfully. May 15 15:17:01.639292 systemd[1]: session-42.scope: Deactivated successfully. May 15 15:17:01.644621 systemd-logind[1490]: Session 42 logged out. Waiting for processes to exit. May 15 15:17:01.646748 systemd-logind[1490]: Removed session 42. May 15 15:17:05.344479 containerd[1566]: time="2025-05-15T15:17:05.344301222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"ac4f1f24fdb621076a59faf14a2ce3f8be458f2c63591a1db24e484e32950029\" pid:6159 exited_at:{seconds:1747322225 nanos:343471543}" May 15 15:17:06.115197 kubelet[2768]: E0515 15:17:06.114910 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:06.638498 systemd[1]: Started sshd@47-165.232.158.142:22-139.178.68.195:43614.service - OpenSSH per-connection server daemon (139.178.68.195:43614). May 15 15:17:06.717305 sshd[6171]: Accepted publickey for core from 139.178.68.195 port 43614 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:06.719391 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:06.726643 systemd-logind[1490]: New session 43 of user core. May 15 15:17:06.734435 systemd[1]: Started session-43.scope - Session 43 of User core. May 15 15:17:06.896528 sshd[6173]: Connection closed by 139.178.68.195 port 43614 May 15 15:17:06.897245 sshd-session[6171]: pam_unix(sshd:session): session closed for user core May 15 15:17:06.909087 systemd-logind[1490]: Session 43 logged out. Waiting for processes to exit. May 15 15:17:06.910483 systemd[1]: sshd@47-165.232.158.142:22-139.178.68.195:43614.service: Deactivated successfully. 
May 15 15:17:06.913870 systemd[1]: session-43.scope: Deactivated successfully. May 15 15:17:06.917159 systemd-logind[1490]: Removed session 43. May 15 15:17:07.792564 kubelet[2768]: I0515 15:17:07.792529 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:17:07.792564 kubelet[2768]: I0515 15:17:07.792579 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:17:07.794676 kubelet[2768]: I0515 15:17:07.794649 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:17:07.840904 kubelet[2768]: I0515 15:17:07.840849 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:17:07.841103 kubelet[2768]: I0515 15:17:07.841072 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841127 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841143 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841155 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841165 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841189 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841199 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841211 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841219 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841227 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:17:07.841264 kubelet[2768]: E0515 15:17:07.841235 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:17:07.841264 kubelet[2768]: I0515 15:17:07.841246 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:17:09.430242 update_engine[1494]: I20250515 15:17:09.430136 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 15:17:09.430692 update_engine[1494]: I20250515 15:17:09.430424 
1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 15:17:09.430728 update_engine[1494]: I20250515 15:17:09.430708 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 15:17:09.431556 update_engine[1494]: E20250515 15:17:09.431504 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 15:17:09.431665 update_engine[1494]: I20250515 15:17:09.431578 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 15:17:10.115524 kubelet[2768]: E0515 15:17:10.115318 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:11.118207 containerd[1566]: time="2025-05-15T15:17:11.117767289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:17:11.917028 systemd[1]: Started sshd@48-165.232.158.142:22-139.178.68.195:43620.service - OpenSSH per-connection server daemon (139.178.68.195:43620). May 15 15:17:11.990146 sshd[6195]: Accepted publickey for core from 139.178.68.195 port 43620 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:11.992160 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:12.000872 systemd-logind[1490]: New session 44 of user core. May 15 15:17:12.006662 systemd[1]: Started session-44.scope - Session 44 of User core. May 15 15:17:12.191575 sshd[6197]: Connection closed by 139.178.68.195 port 43620 May 15 15:17:12.192748 sshd-session[6195]: pam_unix(sshd:session): session closed for user core May 15 15:17:12.200493 systemd-logind[1490]: Session 44 logged out. Waiting for processes to exit. May 15 15:17:12.200945 systemd[1]: sshd@48-165.232.158.142:22-139.178.68.195:43620.service: Deactivated successfully. May 15 15:17:12.206681 systemd[1]: session-44.scope: Deactivated successfully. May 15 15:17:12.213065 systemd-logind[1490]: Removed session 44. 
May 15 15:17:12.815776 containerd[1566]: time="2025-05-15T15:17:12.815627077Z" level=error msg="failed to cleanup \"extract-625357352-5LUq sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:17:12.817248 containerd[1566]: time="2025-05-15T15:17:12.816763806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:17:12.817248 containerd[1566]: time="2025-05-15T15:17:12.816836950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=11538614" May 15 15:17:12.817401 kubelet[2768]: E0515 15:17:12.817104 2768 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:17:12.817401 kubelet[2768]: E0515 15:17:12.817155 2768 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:17:12.818044 kubelet[2768]: E0515 15:17:12.817900 2768 kuberuntime_manager.go:1256] container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6flx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5595bbd956-4ksb6_calico-system(85795e54-736b-42e9-a348-a1b529022653): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:17:12.818044 kubelet[2768]: E0515 15:17:12.817955 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:17:17.207621 systemd[1]: Started sshd@49-165.232.158.142:22-139.178.68.195:38336.service - OpenSSH per-connection server daemon (139.178.68.195:38336). May 15 15:17:17.305519 sshd[6214]: Accepted publickey for core from 139.178.68.195 port 38336 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:17.309234 sshd-session[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:17.318963 systemd-logind[1490]: New session 45 of user core. May 15 15:17:17.326427 systemd[1]: Started session-45.scope - Session 45 of User core. May 15 15:17:17.516408 sshd[6216]: Connection closed by 139.178.68.195 port 38336 May 15 15:17:17.517434 sshd-session[6214]: pam_unix(sshd:session): session closed for user core May 15 15:17:17.523722 systemd[1]: sshd@49-165.232.158.142:22-139.178.68.195:38336.service: Deactivated successfully. May 15 15:17:17.527025 systemd[1]: session-45.scope: Deactivated successfully. May 15 15:17:17.528694 systemd-logind[1490]: Session 45 logged out. Waiting for processes to exit. May 15 15:17:17.531929 systemd-logind[1490]: Removed session 45. 
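[Editor's note] The pull failure above is not a registry or network problem: containerd cannot write the image ingest because the filesystem behind /var/lib/containerd is out of space, which is the same pressure keeping the eviction manager in its reclaim loop. A small sketch for checking the relevant mount points with the Python standard library follows; the 85% warning threshold is an illustrative assumption for flagging pressure, not a value read from this node's kubelet configuration.

import shutil

# Paths named in the containerd/kubelet errors above; on a Flatcar node they
# normally sit on the root filesystem unless separate mounts were configured.
PATHS = ["/var/lib/containerd", "/var/lib/kubelet", "/"]

USED_WARN_PCT = 85.0  # assumed, illustrative warning threshold

def report(paths=PATHS):
    for path in paths:
        try:
            usage = shutil.disk_usage(path)
        except FileNotFoundError:
            print(f"{path}: not present on this host")
            continue
        used_pct = 100.0 * (usage.total - usage.free) / usage.total
        flag = "  <-- likely why the image pull fails" if used_pct >= USED_WARN_PCT else ""
        print(f"{path}: {used_pct:.1f}% used "
              f"({usage.free / 2**30:.1f} GiB free of {usage.total / 2**30:.1f} GiB){flag}")

if __name__ == "__main__":
    report()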
May 15 15:17:17.872723 kubelet[2768]: I0515 15:17:17.872478 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:17:17.872723 kubelet[2768]: I0515 15:17:17.872525 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:17:17.876012 kubelet[2768]: I0515 15:17:17.875888 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:17:17.897785 kubelet[2768]: I0515 15:17:17.897748 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:17:17.898044 kubelet[2768]: I0515 15:17:17.897942 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:17:17.898222 kubelet[2768]: E0515 15:17:17.898184 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:17:17.898222 kubelet[2768]: E0515 15:17:17.898207 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:17:17.898222 kubelet[2768]: E0515 15:17:17.898220 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:17:17.898351 kubelet[2768]: E0515 15:17:17.898232 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:17:17.898397 kubelet[2768]: E0515 15:17:17.898383 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:17:17.898441 kubelet[2768]: E0515 15:17:17.898399 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:17:17.898441 kubelet[2768]: E0515 15:17:17.898413 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:17:17.898441 kubelet[2768]: E0515 15:17:17.898423 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:17:17.898441 kubelet[2768]: E0515 15:17:17.898432 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:17:17.898441 kubelet[2768]: E0515 15:17:17.898441 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:17:17.898590 kubelet[2768]: I0515 15:17:17.898452 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:17:19.124804 kubelet[2768]: E0515 15:17:19.123512 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:19.133487 kubelet[2768]: E0515 15:17:19.133453 2768 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:19.430326 update_engine[1494]: I20250515 15:17:19.429988 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 15:17:19.430735 update_engine[1494]: I20250515 15:17:19.430599 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 15:17:19.430898 update_engine[1494]: I20250515 15:17:19.430877 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 15:17:19.431217 update_engine[1494]: E20250515 15:17:19.431154 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 15:17:19.431281 update_engine[1494]: I20250515 15:17:19.431258 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 15:17:19.431310 update_engine[1494]: I20250515 15:17:19.431280 1494 omaha_request_action.cc:617] Omaha request response: May 15 15:17:19.431450 update_engine[1494]: E20250515 15:17:19.431422 1494 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.433876 1494 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435063 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435081 1494 update_attempter.cc:306] Processing Done. May 15 15:17:19.435522 update_engine[1494]: E20250515 15:17:19.435101 1494 update_attempter.cc:619] Update failed. May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435106 1494 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435112 1494 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435118 1494 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435386 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435423 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435429 1494 omaha_request_action.cc:272] Request: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: May 15 15:17:19.435522 update_engine[1494]: I20250515 15:17:19.435436 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 15:17:19.436211 update_engine[1494]: I20250515 15:17:19.435614 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 15:17:19.436211 update_engine[1494]: I20250515 15:17:19.435962 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 15:17:19.436306 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 15:17:19.436620 update_engine[1494]: E20250515 15:17:19.436381 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436422 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436429 1494 omaha_request_action.cc:617] Omaha request response: May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436436 1494 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436440 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436445 1494 update_attempter.cc:306] Processing Done. May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436450 1494 update_attempter.cc:310] Error event sent. May 15 15:17:19.436620 update_engine[1494]: I20250515 15:17:19.436460 1494 update_check_scheduler.cc:74] Next update check in 48m58s May 15 15:17:19.436895 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 15:17:20.115266 kubelet[2768]: E0515 15:17:20.115164 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:22.538495 systemd[1]: Started sshd@50-165.232.158.142:22-139.178.68.195:38348.service - OpenSSH per-connection server daemon (139.178.68.195:38348). May 15 15:17:22.605543 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 38348 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:22.607944 sshd-session[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:22.614999 systemd-logind[1490]: New session 46 of user core. May 15 15:17:22.620403 systemd[1]: Started session-46.scope - Session 46 of User core. May 15 15:17:22.819505 sshd[6233]: Connection closed by 139.178.68.195 port 38348 May 15 15:17:22.820050 sshd-session[6231]: pam_unix(sshd:session): session closed for user core May 15 15:17:22.835842 systemd[1]: sshd@50-165.232.158.142:22-139.178.68.195:38348.service: Deactivated successfully. May 15 15:17:22.838646 systemd[1]: session-46.scope: Deactivated successfully. May 15 15:17:22.839904 systemd-logind[1490]: Session 46 logged out. Waiting for processes to exit. May 15 15:17:22.842066 systemd-logind[1490]: Removed session 46. 
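[Editor's note] The update_engine errors above are expected rather than alarming: the Omaha request is posted to the literal host name "disabled", which on Flatcar typically indicates that automatic updates were switched off (for example via SERVER=disabled in /etc/flatcar/update.conf; that is an assumption here, not something shown in this log). The DNS failure, the error-code-37 report, and the 48m58s back-off are therefore the normal idle loop. A sketch that condenses these lines into a one-line status is below; journal.txt is a placeholder.

import re

# Literal update_engine / locksmithd phrases from the excerpt above.
RESOLVE_FAIL = 'Could not resolve host: disabled'
NEXT_CHECK = re.compile(r'Next update check in (\S+)')
LAST_OP = re.compile(r'CurrentOperation="([^"]+)"')

def update_status(path="journal.txt"):  # placeholder path
    resolve_failures = 0
    next_check = last_op = None
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            resolve_failures += line.count(RESOLVE_FAIL)
            m = NEXT_CHECK.search(line)
            if m:
                next_check = m.group(1)
            for op in LAST_OP.findall(line):
                last_op = op
    print(f"update server unreachable {resolve_failures}x (host 'disabled'), "
          f"last locksmithd operation: {last_op}, next check: {next_check}")

if __name__ == "__main__":
    update_status()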
May 15 15:17:26.116002 kubelet[2768]: E0515 15:17:26.115794 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:26.117460 kubelet[2768]: E0515 15:17:26.117404 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" podUID="85795e54-736b-42e9-a348-a1b529022653" May 15 15:17:27.837889 systemd[1]: Started sshd@51-165.232.158.142:22-139.178.68.195:35060.service - OpenSSH per-connection server daemon (139.178.68.195:35060). May 15 15:17:27.934806 sshd[6245]: Accepted publickey for core from 139.178.68.195 port 35060 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:27.938923 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:27.949844 kubelet[2768]: I0515 15:17:27.949394 2768 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:17:27.950533 kubelet[2768]: I0515 15:17:27.950241 2768 container_gc.go:88] "Attempting to delete unused containers" May 15 15:17:27.954030 systemd-logind[1490]: New session 47 of user core. May 15 15:17:27.960389 kubelet[2768]: I0515 15:17:27.960277 2768 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:17:27.963276 systemd[1]: Started session-47.scope - Session 47 of User core. May 15 15:17:27.993382 kubelet[2768]: I0515 15:17:27.992850 2768 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:17:27.993382 kubelet[2768]: I0515 15:17:27.993064 2768 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-5595bbd956-4ksb6","calico-system/calico-typha-64b5f48db9-jvlhw","kube-system/coredns-7db6d8ff4d-zchv5","kube-system/coredns-7db6d8ff4d-nzhxw","calico-system/calico-node-56p29","kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781","calico-system/csi-node-driver-ssx6b","kube-system/kube-proxy-xq2kw","kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781","kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781"] May 15 15:17:27.993382 kubelet[2768]: E0515 15:17:27.993109 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-5595bbd956-4ksb6" May 15 15:17:27.993382 kubelet[2768]: E0515 15:17:27.993129 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64b5f48db9-jvlhw" May 15 15:17:27.993382 kubelet[2768]: E0515 15:17:27.993140 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-zchv5" May 15 15:17:27.993382 kubelet[2768]: E0515 15:17:27.993153 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-nzhxw" May 15 15:17:27.993382 kubelet[2768]: E0515 15:17:27.993164 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-56p29" May 15 15:17:27.993965 kubelet[2768]: E0515 15:17:27.993817 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3982d56781" May 15 15:17:27.993965 kubelet[2768]: E0515 15:17:27.993847 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-ssx6b" May 15 15:17:27.993965 kubelet[2768]: E0515 15:17:27.993857 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-xq2kw" May 15 15:17:27.994371 kubelet[2768]: E0515 15:17:27.994308 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3982d56781" May 15 15:17:27.994371 kubelet[2768]: E0515 15:17:27.994335 2768 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3982d56781" May 15 15:17:27.994371 kubelet[2768]: I0515 15:17:27.994358 2768 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:17:28.163503 sshd[6247]: Connection closed by 139.178.68.195 port 35060 May 15 15:17:28.164086 sshd-session[6245]: pam_unix(sshd:session): session closed for user core May 15 15:17:28.169629 systemd[1]: sshd@51-165.232.158.142:22-139.178.68.195:35060.service: Deactivated successfully. May 15 15:17:28.172663 systemd[1]: session-47.scope: Deactivated successfully. May 15 15:17:28.176052 systemd-logind[1490]: Session 47 logged out. Waiting for processes to exit. May 15 15:17:28.179308 systemd-logind[1490]: Removed session 47. May 15 15:17:30.115109 kubelet[2768]: E0515 15:17:30.114987 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:17:33.182138 systemd[1]: Started sshd@52-165.232.158.142:22-139.178.68.195:35074.service - OpenSSH per-connection server daemon (139.178.68.195:35074). May 15 15:17:33.253349 sshd[6259]: Accepted publickey for core from 139.178.68.195 port 35074 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:33.255835 sshd-session[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:33.266917 systemd-logind[1490]: New session 48 of user core. May 15 15:17:33.270450 systemd[1]: Started session-48.scope - Session 48 of User core. May 15 15:17:33.414346 sshd[6261]: Connection closed by 139.178.68.195 port 35074 May 15 15:17:33.416419 sshd-session[6259]: pam_unix(sshd:session): session closed for user core May 15 15:17:33.430422 systemd[1]: sshd@52-165.232.158.142:22-139.178.68.195:35074.service: Deactivated successfully. May 15 15:17:33.434995 systemd[1]: session-48.scope: Deactivated successfully. May 15 15:17:33.437253 systemd-logind[1490]: Session 48 logged out. Waiting for processes to exit. May 15 15:17:33.442443 systemd-logind[1490]: Removed session 48. May 15 15:17:33.444669 systemd[1]: Started sshd@53-165.232.158.142:22-139.178.68.195:35080.service - OpenSSH per-connection server daemon (139.178.68.195:35080). May 15 15:17:33.504486 sshd[6272]: Accepted publickey for core from 139.178.68.195 port 35080 ssh2: RSA SHA256:MR6P4SMnBj7Bljnyb1daa15ne/ebNhdFSQPikHCJ1Fk May 15 15:17:33.508066 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:17:33.516514 systemd-logind[1490]: New session 49 of user core. May 15 15:17:33.522000 systemd[1]: Started session-49.scope - Session 49 of User core. 
May 15 15:17:33.797837 sshd[6274]: Connection closed by 139.178.68.195 port 35080 May 15 15:17:33.799968 sshd-session[6272]: pam_unix(sshd:session): session closed for user core May 15 15:17:33.815953 systemd[1]: sshd@53-165.232.158.142:22-139.178.68.195:35080.service: Deactivated successfully. May 15 15:17:33.821740 systemd[1]: session-49.scope: Deactivated successfully. May 15 15:17:33.823388 systemd-logind[1490]: Session 49 logged out. Waiting for processes to exit. May 15 15:17:33.825441 systemd-logind[1490]: Removed session 49. May 15 15:17:35.347933 containerd[1566]: time="2025-05-15T15:17:35.347886633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5d72caebdf5e0778ae06ea873fa22e1d51ee1b6fb4481ccdff4d34b625a0af1\" id:\"641408d43672d81aea56ef3f9a535cc12bc56bab776ddba975b947b51ff58171\" pid:6297 exited_at:{seconds:1747322255 nanos:347520213}"