May 15 15:43:36.998027 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 15:43:36.998072 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 15:43:36.998089 kernel: BIOS-provided physical RAM map:
May 15 15:43:36.998101 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 15:43:36.998112 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 15:43:36.998124 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 15:43:36.998138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 15 15:43:36.998157 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 15 15:43:36.998171 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 15:43:36.998183 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 15:43:36.998196 kernel: NX (Execute Disable) protection: active
May 15 15:43:36.998207 kernel: APIC: Static calls initialized
May 15 15:43:36.998218 kernel: SMBIOS 2.8 present.
May 15 15:43:36.998229 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 15 15:43:36.998247 kernel: DMI: Memory slots populated: 1/1
May 15 15:43:36.998260 kernel: Hypervisor detected: KVM
May 15 15:43:36.998278 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 15:43:36.998290 kernel: kvm-clock: using sched offset of 5263111204 cycles
May 15 15:43:36.998303 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 15:43:36.998315 kernel: tsc: Detected 2000.000 MHz processor
May 15 15:43:36.998328 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 15:43:36.998341 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 15:43:36.998354 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 15 15:43:36.998371 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 15:43:36.998384 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 15:43:36.998396 kernel: ACPI: Early table checksum verification disabled
May 15 15:43:36.998409 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 15 15:43:36.998422 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998434 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998447 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998460 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 15:43:36.998473 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998490 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998503 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998515 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 15:43:36.998528 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 15 15:43:36.998541 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 15 15:43:36.998573 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 15:43:36.998586 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 15 15:43:36.998598 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 15 15:43:36.998621 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 15 15:43:36.998635 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 15 15:43:36.998649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 15 15:43:36.998663 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 15 15:43:36.998676 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 15 15:43:36.998719 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 15 15:43:36.998732 kernel: Zone ranges:
May 15 15:43:36.998745 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 15:43:36.998759 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 15 15:43:36.998771 kernel: Normal empty
May 15 15:43:36.998800 kernel: Device empty
May 15 15:43:36.998814 kernel: Movable zone start for each node
May 15 15:43:36.998828 kernel: Early memory node ranges
May 15 15:43:36.998847 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 15:43:36.998862 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 15 15:43:36.998881 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 15 15:43:36.998895 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 15:43:36.998909 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 15:43:36.998923 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 15 15:43:36.998937 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 15:43:36.998951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 15:43:36.998972 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 15:43:36.998986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 15:43:36.999004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 15:43:36.999021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 15:43:36.999041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 15:43:36.999054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 15:43:36.999068 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 15:43:36.999082 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 15:43:36.999096 kernel: TSC deadline timer available
May 15 15:43:36.999109 kernel: CPU topo: Max. logical packages: 1
May 15 15:43:36.999123 kernel: CPU topo: Max. logical dies: 1
May 15 15:43:36.999137 kernel: CPU topo: Max. dies per package: 1
May 15 15:43:36.999155 kernel: CPU topo: Max. threads per core: 1
May 15 15:43:36.999168 kernel: CPU topo: Num. cores per package: 2
May 15 15:43:36.999183 kernel: CPU topo: Num. threads per package: 2
May 15 15:43:36.999197 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 15:43:36.999211 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 15:43:36.999225 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 15 15:43:36.999238 kernel: Booting paravirtualized kernel on KVM
May 15 15:43:36.999253 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 15:43:36.999267 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 15:43:36.999282 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 15:43:36.999300 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 15:43:36.999314 kernel: pcpu-alloc: [0] 0 1
May 15 15:43:36.999328 kernel: kvm-guest: PV spinlocks disabled, no host support
May 15 15:43:36.999345 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 15:43:36.999360 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 15:43:36.999374 kernel: random: crng init done
May 15 15:43:36.999388 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 15:43:36.999402 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 15 15:43:36.999420 kernel: Fallback order for Node 0: 0
May 15 15:43:36.999433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 15 15:43:36.999445 kernel: Policy zone: DMA32
May 15 15:43:36.999455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 15:43:36.999466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 15:43:36.999477 kernel: Kernel/User page tables isolation: enabled
May 15 15:43:36.999488 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 15:43:36.999502 kernel: ftrace: allocated 157 pages with 5 groups
May 15 15:43:36.999516 kernel: Dynamic Preempt: voluntary
May 15 15:43:36.999533 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 15:43:36.999549 kernel: rcu: RCU event tracing is enabled.
May 15 15:43:36.999563 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 15:43:36.999578 kernel: Trampoline variant of Tasks RCU enabled.
May 15 15:43:36.999592 kernel: Rude variant of Tasks RCU enabled.
May 15 15:43:36.999605 kernel: Tracing variant of Tasks RCU enabled.
May 15 15:43:36.999617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 15:43:36.999631 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 15:43:36.999645 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 15:43:36.999669 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 15:43:36.999684 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 15:43:36.999716 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 15:43:36.999728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 15:43:36.999742 kernel: Console: colour VGA+ 80x25
May 15 15:43:36.999755 kernel: printk: legacy console [tty0] enabled
May 15 15:43:36.999767 kernel: printk: legacy console [ttyS0] enabled
May 15 15:43:36.999807 kernel: ACPI: Core revision 20240827
May 15 15:43:36.999822 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 15:43:36.999852 kernel: APIC: Switch to symmetric I/O mode setup
May 15 15:43:36.999866 kernel: x2apic enabled
May 15 15:43:36.999881 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 15:43:36.999898 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 15:43:36.999919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 15:43:36.999935 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 15 15:43:36.999949 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 15 15:43:36.999964 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 15 15:43:36.999980 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 15:43:36.999998 kernel: Spectre V2 : Mitigation: Retpolines
May 15 15:43:37.000012 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 15:43:37.000028 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 15:43:37.000042 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 15:43:37.000057 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 15:43:37.000072 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 15:43:37.000088 kernel: MDS: Mitigation: Clear CPU buffers
May 15 15:43:37.000106 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 15 15:43:37.000121 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 15:43:37.000136 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 15:43:37.000151 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 15:43:37.000166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 15:43:37.000181 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 15 15:43:37.000196 kernel: Freeing SMP alternatives memory: 32K
May 15 15:43:37.000212 kernel: pid_max: default: 32768 minimum: 301
May 15 15:43:37.000228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 15:43:37.000248 kernel: landlock: Up and running.
May 15 15:43:37.000260 kernel: SELinux: Initializing.
May 15 15:43:37.000275 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 15:43:37.000290 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 15 15:43:37.000306 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 15 15:43:37.000322 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 15 15:43:37.000339 kernel: signal: max sigframe size: 1776
May 15 15:43:37.000355 kernel: rcu: Hierarchical SRCU implementation.
May 15 15:43:37.000371 kernel: rcu: Max phase no-delay instances is 400.
May 15 15:43:37.000391 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 15:43:37.000407 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 15 15:43:37.000423 kernel: smp: Bringing up secondary CPUs ...
May 15 15:43:37.000438 kernel: smpboot: x86: Booting SMP configuration:
May 15 15:43:37.000460 kernel: .... node #0, CPUs: #1
May 15 15:43:37.000476 kernel: smp: Brought up 1 node, 2 CPUs
May 15 15:43:37.000492 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 15 15:43:37.000509 kernel: Memory: 1966904K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 125144K reserved, 0K cma-reserved)
May 15 15:43:37.000524 kernel: devtmpfs: initialized
May 15 15:43:37.000542 kernel: x86/mm: Memory block size: 128MB
May 15 15:43:37.000557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 15:43:37.000572 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 15:43:37.000587 kernel: pinctrl core: initialized pinctrl subsystem
May 15 15:43:37.000602 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 15:43:37.000617 kernel: audit: initializing netlink subsys (disabled)
May 15 15:43:37.000632 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 15:43:37.000647 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 15:43:37.000662 kernel: audit: type=2000 audit(1747323811.931:1): state=initialized audit_enabled=0 res=1
May 15 15:43:37.000681 kernel: cpuidle: using governor menu
May 15 15:43:37.000722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 15:43:37.000738 kernel: dca service started, version 1.12.1
May 15 15:43:37.000753 kernel: PCI: Using configuration type 1 for base access
May 15 15:43:37.000769 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 15:43:37.000782 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 15:43:37.000795 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 15:43:37.000809 kernel: ACPI: Added _OSI(Module Device)
May 15 15:43:37.000823 kernel: ACPI: Added _OSI(Processor Device)
May 15 15:43:37.000842 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 15:43:37.000853 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 15:43:37.000865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 15:43:37.000878 kernel: ACPI: Interpreter enabled
May 15 15:43:37.000892 kernel: ACPI: PM: (supports S0 S5)
May 15 15:43:37.000904 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 15:43:37.000917 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 15:43:37.000929 kernel: PCI: Using E820 reservations for host bridge windows
May 15 15:43:37.000942 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 15 15:43:37.000958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 15:43:37.001287 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 15 15:43:37.001447 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 15 15:43:37.001588 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 15 15:43:37.001608 kernel: acpiphp: Slot [3] registered
May 15 15:43:37.001623 kernel: acpiphp: Slot [4] registered
May 15 15:43:37.001636 kernel: acpiphp: Slot [5] registered
May 15 15:43:37.001659 kernel: acpiphp: Slot [6] registered
May 15 15:43:37.001672 kernel: acpiphp: Slot [7] registered
May 15 15:43:37.001684 kernel: acpiphp: Slot [8] registered
May 15 15:43:37.003409 kernel: acpiphp: Slot [9] registered
May 15 15:43:37.003444 kernel: acpiphp: Slot [10] registered
May 15 15:43:37.003461 kernel: acpiphp: Slot [11] registered
May 15 15:43:37.003478 kernel: acpiphp: Slot [12] registered
May 15 15:43:37.003494 kernel: acpiphp: Slot [13] registered
May 15 15:43:37.003510 kernel: acpiphp: Slot [14] registered
May 15 15:43:37.003526 kernel: acpiphp: Slot [15] registered
May 15 15:43:37.003552 kernel: acpiphp: Slot [16] registered
May 15 15:43:37.003568 kernel: acpiphp: Slot [17] registered
May 15 15:43:37.003584 kernel: acpiphp: Slot [18] registered
May 15 15:43:37.003600 kernel: acpiphp: Slot [19] registered
May 15 15:43:37.003615 kernel: acpiphp: Slot [20] registered
May 15 15:43:37.003631 kernel: acpiphp: Slot [21] registered
May 15 15:43:37.003647 kernel: acpiphp: Slot [22] registered
May 15 15:43:37.003663 kernel: acpiphp: Slot [23] registered
May 15 15:43:37.003679 kernel: acpiphp: Slot [24] registered
May 15 15:43:37.005193 kernel: acpiphp: Slot [25] registered
May 15 15:43:37.005224 kernel: acpiphp: Slot [26] registered
May 15 15:43:37.005238 kernel: acpiphp: Slot [27] registered
May 15 15:43:37.005252 kernel: acpiphp: Slot [28] registered
May 15 15:43:37.005266 kernel: acpiphp: Slot [29] registered
May 15 15:43:37.005279 kernel: acpiphp: Slot [30] registered
May 15 15:43:37.005291 kernel: acpiphp: Slot [31] registered
May 15 15:43:37.005303 kernel: PCI host bridge to bus 0000:00
May 15 15:43:37.005551 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 15:43:37.005692 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 15:43:37.005918 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 15:43:37.006237 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 15 15:43:37.006362 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 15 15:43:37.006478 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 15:43:37.007469 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 15 15:43:37.009236 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 15 15:43:37.009461 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 15 15:43:37.009602 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 15 15:43:37.009773 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 15 15:43:37.009914 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 15 15:43:37.010047 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 15 15:43:37.010180 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 15 15:43:37.010354 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 15 15:43:37.010504 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 15 15:43:37.010772 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 15 15:43:37.011825 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 15 15:43:37.012057 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 15 15:43:37.012272 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 15 15:43:37.012428 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 15 15:43:37.012567 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 15 15:43:37.015841 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 15 15:43:37.016081 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 15 15:43:37.016223 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 15:43:37.016415 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 15:43:37.016582 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 15 15:43:37.016761 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 15 15:43:37.016905 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 15 15:43:37.017068 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 15:43:37.017322 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 15 15:43:37.017466 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 15 15:43:37.017605 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 15 15:43:37.017794 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 15:43:37.017944 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 15 15:43:37.018075 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 15 15:43:37.018210 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 15 15:43:37.018382 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 15:43:37.018527 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 15 15:43:37.018666 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 15 15:43:37.020992 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 15 15:43:37.021249 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 15:43:37.021430 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 15 15:43:37.021584 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 15 15:43:37.021844 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 15 15:43:37.021993 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 15 15:43:37.022132 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 15 15:43:37.022317 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 15 15:43:37.022338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 15:43:37.022352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 15:43:37.022365 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 15:43:37.022379 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 15:43:37.022392 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 15 15:43:37.022407 kernel: iommu: Default domain type: Translated
May 15 15:43:37.022420 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 15:43:37.022441 kernel: PCI: Using ACPI for IRQ routing
May 15 15:43:37.022454 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 15:43:37.022466 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 15:43:37.022480 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 15 15:43:37.022670 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 15 15:43:37.024976 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 15 15:43:37.025146 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 15:43:37.025186 kernel: vgaarb: loaded
May 15 15:43:37.025201 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 15:43:37.025230 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 15:43:37.025243 kernel: clocksource: Switched to clocksource kvm-clock
May 15 15:43:37.025256 kernel: VFS: Disk quotas dquot_6.6.0
May 15 15:43:37.025270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 15:43:37.025285 kernel: pnp: PnP ACPI init
May 15 15:43:37.025300 kernel: pnp: PnP ACPI: found 4 devices
May 15 15:43:37.025315 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 15:43:37.025329 kernel: NET: Registered PF_INET protocol family
May 15 15:43:37.025341 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 15:43:37.025358 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 15 15:43:37.025371 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 15:43:37.025385 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 15 15:43:37.025399 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 15 15:43:37.025413 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 15 15:43:37.025428 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 15:43:37.025444 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 15 15:43:37.025459 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 15:43:37.025477 kernel: NET: Registered PF_XDP protocol family
May 15 15:43:37.025642 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 15:43:37.025799 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 15:43:37.025922 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 15:43:37.026044 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 15 15:43:37.026164 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 15 15:43:37.026310 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 15 15:43:37.026452 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 15 15:43:37.026481 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 15 15:43:37.026615 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 30346 usecs
May 15 15:43:37.026634 kernel: PCI: CLS 0 bytes, default 64
May 15 15:43:37.026649 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 15 15:43:37.026665 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 15:43:37.026681 kernel: Initialise system trusted keyrings
May 15 15:43:37.027654 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 15 15:43:37.027688 kernel: Key type asymmetric registered
May 15 15:43:37.027720 kernel: Asymmetric key parser 'x509' registered
May 15 15:43:37.027763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 15:43:37.027778 kernel: io scheduler mq-deadline registered
May 15 15:43:37.027793 kernel: io scheduler kyber registered
May 15 15:43:37.027808 kernel: io scheduler bfq registered
May 15 15:43:37.027823 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 15:43:37.027839 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 15 15:43:37.027854 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 15 15:43:37.027869 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 15 15:43:37.027884 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 15:43:37.027902 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 15:43:37.027917 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 15:43:37.027932 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 15:43:37.027947 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 15:43:37.028185 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 15:43:37.028341 kernel: rtc_cmos 00:03: registered as rtc0
May 15 15:43:37.028482 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T15:43:36 UTC (1747323816)
May 15 15:43:37.028500 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 15:43:37.028627 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 15 15:43:37.028641 kernel: intel_pstate: CPU model not supported
May 15 15:43:37.028655 kernel: NET: Registered PF_INET6 protocol family
May 15 15:43:37.028668 kernel: Segment Routing with IPv6
May 15 15:43:37.028680 kernel: In-situ OAM (IOAM) with IPv6
May 15 15:43:37.028693 kernel: NET: Registered PF_PACKET protocol family
May 15 15:43:37.028725 kernel: Key type dns_resolver registered
May 15 15:43:37.028739 kernel: IPI shorthand broadcast: enabled
May 15 15:43:37.028752 kernel: sched_clock: Marking stable (4202007502, 162459152)->(4402457629, -37990975)
May 15 15:43:37.028771 kernel: registered taskstats version 1
May 15 15:43:37.028784 kernel: Loading compiled-in X.509 certificates
May 15 15:43:37.028796 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 15:43:37.028809 kernel: Demotion targets for Node 0: null
May 15 15:43:37.028845 kernel: Key type .fscrypt registered
May 15 15:43:37.028858 kernel: Key type fscrypt-provisioning registered
May 15 15:43:37.028895 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 15:43:37.028911 kernel: ima: Allocated hash algorithm: sha1
May 15 15:43:37.028925 kernel: ima: No architecture policies found
May 15 15:43:37.028938 kernel: clk: Disabling unused clocks
May 15 15:43:37.028960 kernel: Warning: unable to open an initial console.
May 15 15:43:37.028974 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 15:43:37.028987 kernel: Write protecting the kernel read-only data: 24576k
May 15 15:43:37.028999 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 15:43:37.029018 kernel: Run /init as init process
May 15 15:43:37.029032 kernel: with arguments:
May 15 15:43:37.029044 kernel: /init
May 15 15:43:37.029058 kernel: with environment:
May 15 15:43:37.029074 kernel: HOME=/
May 15 15:43:37.029085 kernel: TERM=linux
May 15 15:43:37.029097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 15:43:37.029112 systemd[1]: Successfully made /usr/ read-only.
May 15 15:43:37.029131 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 15:43:37.029146 systemd[1]: Detected virtualization kvm.
May 15 15:43:37.029211 systemd[1]: Detected architecture x86-64.
May 15 15:43:37.029231 systemd[1]: Running in initrd.
May 15 15:43:37.029245 systemd[1]: No hostname configured, using default hostname.
May 15 15:43:37.029260 systemd[1]: Hostname set to .
May 15 15:43:37.029272 systemd[1]: Initializing machine ID from VM UUID.
May 15 15:43:37.029285 systemd[1]: Queued start job for default target initrd.target.
May 15 15:43:37.029298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 15:43:37.029314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 15:43:37.029332 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 15:43:37.029349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 15:43:37.029363 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 15:43:37.029382 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 15:43:37.029398 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 15:43:37.029414 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 15:43:37.029429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 15:43:37.029442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 15:43:37.029456 systemd[1]: Reached target paths.target - Path Units.
May 15 15:43:37.029469 systemd[1]: Reached target slices.target - Slice Units.
May 15 15:43:37.029483 systemd[1]: Reached target swap.target - Swaps.
May 15 15:43:37.029497 systemd[1]: Reached target timers.target - Timer Units.
May 15 15:43:37.029512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 15:43:37.029530 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 15:43:37.029544 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 15:43:37.029557 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 15:43:37.029570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 15:43:37.029585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 15:43:37.029600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 15:43:37.029616 systemd[1]: Reached target sockets.target - Socket Units.
May 15 15:43:37.029629 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 15:43:37.029643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 15:43:37.029662 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 15:43:37.029678 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 15:43:37.029693 systemd[1]: Starting systemd-fsck-usr.service...
May 15 15:43:37.032789 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 15:43:37.032809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 15:43:37.032827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 15:43:37.032844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 15:43:37.032928 systemd-journald[213]: Collecting audit messages is disabled.
May 15 15:43:37.032972 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 15:43:37.032990 systemd[1]: Finished systemd-fsck-usr.service.
May 15 15:43:37.033007 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 15:43:37.033025 systemd-journald[213]: Journal started
May 15 15:43:37.033059 systemd-journald[213]: Runtime Journal (/run/log/journal/af670fa0397448c5929f23191605c200) is 4.9M, max 39.5M, 34.6M free.
May 15 15:43:36.985290 systemd-modules-load[214]: Inserted module 'overlay'
May 15 15:43:37.095465 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 15:43:37.095511 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 15:43:37.095535 kernel: Bridge firewalling registered
May 15 15:43:37.052048 systemd-modules-load[214]: Inserted module 'br_netfilter'
May 15 15:43:37.098174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 15:43:37.099148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 15:43:37.100599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 15:43:37.105853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 15:43:37.107582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 15:43:37.114018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 15:43:37.116384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 15:43:37.139643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 15:43:37.142130 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 15:43:37.145775 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 15:43:37.151792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 15:43:37.156211 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 15:43:37.160980 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 15:43:37.164036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 15:43:37.203730 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 15:43:37.224836 systemd-resolved[248]: Positive Trust Anchors:
May 15 15:43:37.225782 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 15:43:37.225835 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 15:43:37.234525 systemd-resolved[248]: Defaulting to hostname 'linux'.
May 15 15:43:37.238324 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 15:43:37.239321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 15:43:37.329780 kernel: SCSI subsystem initialized
May 15 15:43:37.341760 kernel: Loading iSCSI transport class v2.0-870.
May 15 15:43:37.354770 kernel: iscsi: registered transport (tcp)
May 15 15:43:37.380790 kernel: iscsi: registered transport (qla4xxx)
May 15 15:43:37.380908 kernel: QLogic iSCSI HBA Driver
May 15 15:43:37.410289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 15:43:37.428942 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 15:43:37.432448 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 15:43:37.502542 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 15:43:37.505271 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 15:43:37.572809 kernel: raid6: avx2x4 gen() 23824 MB/s
May 15 15:43:37.588777 kernel: raid6: avx2x2 gen() 27766 MB/s
May 15 15:43:37.606147 kernel: raid6: avx2x1 gen() 17427 MB/s
May 15 15:43:37.606268 kernel: raid6: using algorithm avx2x2 gen() 27766 MB/s
May 15 15:43:37.625900 kernel: raid6: .... xor() 14176 MB/s, rmw enabled
May 15 15:43:37.626026 kernel: raid6: using avx2x2 recovery algorithm
May 15 15:43:37.653786 kernel: xor: automatically using best checksumming function avx
May 15 15:43:37.834769 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 15:43:37.844773 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 15:43:37.847764 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 15:43:37.880094 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 15 15:43:37.886646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 15:43:37.891534 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 15:43:37.922529 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 15 15:43:37.957250 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 15:43:37.960045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 15:43:38.026737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 15:43:38.031468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 15:43:38.129782 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 15 15:43:38.223885 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 15 15:43:38.224048 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 15 15:43:38.224157 kernel: scsi host0: Virtio SCSI HBA
May 15 15:43:38.224321 kernel: cryptd: max_cpu_qlen set to 1000
May 15 15:43:38.224341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 15:43:38.224371 kernel: GPT:9289727 != 125829119
May 15 15:43:38.224387 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 15:43:38.224404 kernel: GPT:9289727 != 125829119
May 15 15:43:38.224420 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 15:43:38.224437 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 15:43:38.224453 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 15 15:43:38.299886 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 15:43:38.299916 kernel: ACPI: bus type USB registered
May 15 15:43:38.299934 kernel: usbcore: registered new interface driver usbfs
May 15 15:43:38.299964 kernel: usbcore: registered new interface driver hub
May 15 15:43:38.299981 kernel: usbcore: registered new device driver usb
May 15 15:43:38.300002 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 15 15:43:38.300206 kernel: AES CTR mode by8 optimization enabled
May 15 15:43:38.204589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 15:43:38.396725 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 15 15:43:38.396978 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 15 15:43:38.397104 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 15 15:43:38.397238 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 15 15:43:38.397346 kernel: hub 1-0:1.0: USB hub found
May 15 15:43:38.397494 kernel: hub 1-0:1.0: 2 ports detected
May 15 15:43:38.397670 kernel: libata version 3.00 loaded.
May 15 15:43:38.397692 kernel: ata_piix 0000:00:01.1: version 2.13
May 15 15:43:38.397851 kernel: scsi host1: ata_piix
May 15 15:43:38.397976 kernel: scsi host2: ata_piix
May 15 15:43:38.398107 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 15 15:43:38.398119 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 15 15:43:38.204811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 15:43:38.205583 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 15:43:38.208094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 15:43:38.210140 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 15:43:38.366709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 15:43:38.404368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 15:43:38.414872 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 15:43:38.424365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 15:43:38.425159 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 15:43:38.435971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 15:43:38.450543 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 15:43:38.475849 disk-uuid[606]: Primary Header is updated.
May 15 15:43:38.475849 disk-uuid[606]: Secondary Entries is updated.
May 15 15:43:38.475849 disk-uuid[606]: Secondary Header is updated.
May 15 15:43:38.481744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 15:43:38.488745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 15:43:38.641838 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 15:43:38.668807 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 15:43:38.669502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 15:43:38.671019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 15:43:38.673315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 15:43:38.700012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 15:43:39.491198 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 15:43:39.491724 disk-uuid[607]: The operation has completed successfully.
May 15 15:43:39.556214 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 15:43:39.556335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 15:43:39.586430 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 15:43:39.603140 sh[632]: Success
May 15 15:43:39.627737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 15:43:39.631687 kernel: device-mapper: uevent: version 1.0.3
May 15 15:43:39.631805 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 15:43:39.645733 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 15 15:43:39.708946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 15:43:39.711247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 15:43:39.726593 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 15:43:39.741445 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 15:43:39.741548 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (644)
May 15 15:43:39.747350 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 15:43:39.747438 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 15:43:39.747455 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 15:43:39.756623 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 15:43:39.758850 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 15:43:39.760678 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 15:43:39.763084 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 15:43:39.766885 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 15:43:39.788733 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (671)
May 15 15:43:39.788817 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 15:43:39.792201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 15:43:39.792273 kernel: BTRFS info (device vda6): using free-space-tree
May 15 15:43:39.806781 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 15:43:39.808213 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 15:43:39.811427 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 15:43:39.982856 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 15:43:39.986904 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 15:43:40.045743 ignition[718]: Ignition 2.21.0
May 15 15:43:40.047015 ignition[718]: Stage: fetch-offline
May 15 15:43:40.047837 ignition[718]: no configs at "/usr/lib/ignition/base.d"
May 15 15:43:40.047856 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 15:43:40.048077 ignition[718]: parsed url from cmdline: ""
May 15 15:43:40.048083 ignition[718]: no config URL provided
May 15 15:43:40.048097 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
May 15 15:43:40.048111 ignition[718]: no config at "/usr/lib/ignition/user.ign"
May 15 15:43:40.048122 ignition[718]: failed to fetch config: resource requires networking
May 15 15:43:40.048482 ignition[718]: Ignition finished successfully
May 15 15:43:40.055876 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 15:43:40.058151 systemd-networkd[818]: lo: Link UP
May 15 15:43:40.058158 systemd-networkd[818]: lo: Gained carrier
May 15 15:43:40.062590 systemd-networkd[818]: Enumeration completed
May 15 15:43:40.063483 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 15:43:40.063486 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 15:43:40.063490 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 15 15:43:40.065402 systemd-networkd[818]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 15:43:40.065408 systemd-networkd[818]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 15:43:40.065960 systemd[1]: Reached target network.target - Network.
May 15 15:43:40.066350 systemd-networkd[818]: eth0: Link UP
May 15 15:43:40.066356 systemd-networkd[818]: eth0: Gained carrier
May 15 15:43:40.066372 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 15 15:43:40.068002 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 15:43:40.072453 systemd-networkd[818]: eth1: Link UP
May 15 15:43:40.072459 systemd-networkd[818]: eth1: Gained carrier
May 15 15:43:40.072485 systemd-networkd[818]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 15:43:40.089853 systemd-networkd[818]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
May 15 15:43:40.094821 systemd-networkd[818]: eth0: DHCPv4 address 164.92.106.96/19, gateway 164.92.96.1 acquired from 169.254.169.253
May 15 15:43:40.113641 ignition[822]: Ignition 2.21.0
May 15 15:43:40.115333 ignition[822]: Stage: fetch
May 15 15:43:40.115617 ignition[822]: no configs at "/usr/lib/ignition/base.d"
May 15 15:43:40.115637 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 15:43:40.115817 ignition[822]: parsed url from cmdline: ""
May 15 15:43:40.115823 ignition[822]: no config URL provided
May 15 15:43:40.115831 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
May 15 15:43:40.115843 ignition[822]: no config at "/usr/lib/ignition/user.ign"
May 15 15:43:40.115897 ignition[822]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 15 15:43:40.134597 ignition[822]: GET result: OK
May 15 15:43:40.135361 ignition[822]: parsing config with SHA512: a10cad6517c4764a943daa9d30e0ae6a29fff025fdd007cc95af12db8bd77a31b16b420000620d7c04b1ccdd62f39bb213eacaf569cfdd4035226783c8566879
May 15 15:43:40.144330 unknown[822]: fetched base config from "system"
May 15 15:43:40.144354 unknown[822]: fetched base config from "system"
May 15 15:43:40.145253 ignition[822]: fetch: fetch complete
May 15 15:43:40.144365 unknown[822]: fetched user config from "digitalocean"
May 15 15:43:40.145263 ignition[822]: fetch: fetch passed
May 15 15:43:40.148200 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 15:43:40.145390 ignition[822]: Ignition finished successfully
May 15 15:43:40.157952 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 15:43:40.219177 ignition[829]: Ignition 2.21.0
May 15 15:43:40.220290 ignition[829]: Stage: kargs
May 15 15:43:40.221183 ignition[829]: no configs at "/usr/lib/ignition/base.d"
May 15 15:43:40.221201 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 15:43:40.223684 ignition[829]: kargs: kargs passed
May 15 15:43:40.223826 ignition[829]: Ignition finished successfully
May 15 15:43:40.226138 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 15:43:40.230056 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 15:43:40.278216 ignition[836]: Ignition 2.21.0
May 15 15:43:40.278249 ignition[836]: Stage: disks
May 15 15:43:40.278533 ignition[836]: no configs at "/usr/lib/ignition/base.d"
May 15 15:43:40.278549 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 15 15:43:40.279944 ignition[836]: disks: disks passed
May 15 15:43:40.280029 ignition[836]: Ignition finished successfully
May 15 15:43:40.282025 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 15:43:40.284163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 15:43:40.285073 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 15:43:40.286552 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 15:43:40.287988 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 15:43:40.289428 systemd[1]: Reached target basic.target - Basic System.
May 15 15:43:40.292609 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 15:43:40.325766 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 15:43:40.328478 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 15:43:40.331860 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 15:43:40.470761 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 15:43:40.471126 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 15:43:40.472364 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 15:43:40.476872 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 15:43:40.479821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 15:43:40.483907 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 15 15:43:40.486638 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 15 15:43:40.489119 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 15:43:40.489231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 15:43:40.506157 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (853)
May 15 15:43:40.511748 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 15:43:40.511843 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 15:43:40.511880 kernel: BTRFS info (device vda6): using free-space-tree
May 15 15:43:40.520399 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 15:43:40.528220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 15:43:40.536370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 15:43:40.603937 coreos-metadata[856]: May 15 15:43:40.603 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 15:43:40.618785 coreos-metadata[855]: May 15 15:43:40.618 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 15 15:43:40.621379 coreos-metadata[856]: May 15 15:43:40.619 INFO Fetch successful
May 15 15:43:40.623744 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory
May 15 15:43:40.626541 coreos-metadata[856]: May 15 15:43:40.626 INFO wrote hostname ci-4334.0.0-a-8a7930f089 to /sysroot/etc/hostname
May 15 15:43:40.627761 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 15 15:43:40.633112 coreos-metadata[855]: May 15 15:43:40.633 INFO Fetch successful
May 15 15:43:40.640982 initrd-setup-root[891]: cut: /sysroot/etc/group: No such file or directory
May 15 15:43:40.641512 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 15 15:43:40.641672 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 15 15:43:40.650462 initrd-setup-root[899]: cut: /sysroot/etc/shadow: No such file or directory
May 15 15:43:40.657570 initrd-setup-root[906]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 15:43:40.786345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 15:43:40.789224 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 15:43:40.790837 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 15:43:40.815035 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 15:43:40.816723 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:43:40.839803 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 15:43:40.853800 ignition[976]: INFO : Ignition 2.21.0 May 15 15:43:40.853800 ignition[976]: INFO : Stage: mount May 15 15:43:40.858102 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:43:40.858102 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:43:40.859967 ignition[976]: INFO : mount: mount passed May 15 15:43:40.859967 ignition[976]: INFO : Ignition finished successfully May 15 15:43:40.860780 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 15:43:40.863722 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 15:43:40.905036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 15:43:40.931067 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (988) May 15 15:43:40.931135 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 15:43:40.932878 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 15:43:40.935550 kernel: BTRFS info (device vda6): using free-space-tree May 15 15:43:40.940840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 15:43:40.975875 ignition[1005]: INFO : Ignition 2.21.0 May 15 15:43:40.975875 ignition[1005]: INFO : Stage: files May 15 15:43:40.977883 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:43:40.977883 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:43:40.977883 ignition[1005]: DEBUG : files: compiled without relabeling support, skipping May 15 15:43:40.980940 ignition[1005]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 15:43:40.980940 ignition[1005]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 15:43:40.983304 ignition[1005]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 15:43:40.983304 ignition[1005]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 15:43:40.983304 ignition[1005]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 15:43:40.982679 unknown[1005]: wrote ssh authorized keys file for user: core May 15 15:43:40.989118 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 15:43:40.989118 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 15:43:41.096108 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 15:43:41.351735 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 15:43:41.351735 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 15:43:41.351735 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 15:43:41.356740 ignition[1005]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 15:43:41.356740 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 15:43:41.356740 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 15:43:41.356740 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 15:43:41.356740 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 15:43:41.356740 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:43:41.367609 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 15:43:41.408919 systemd-networkd[818]: eth1: Gained IPv6LL May 15 15:43:41.729267 systemd-networkd[818]: eth0: Gained IPv6LL May 15 15:43:41.878543 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 15:43:42.170240 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 15:43:42.170240 ignition[1005]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 15:43:42.172815 ignition[1005]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 15:43:42.174693 ignition[1005]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 15:43:42.174693 ignition[1005]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 15:43:42.174693 ignition[1005]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 15:43:42.177461 ignition[1005]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 15:43:42.177461 ignition[1005]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 15:43:42.177461 ignition[1005]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file 
"/sysroot/etc/.ignition-result.json" May 15 15:43:42.177461 ignition[1005]: INFO : files: files passed May 15 15:43:42.177461 ignition[1005]: INFO : Ignition finished successfully May 15 15:43:42.177016 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 15:43:42.180949 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 15:43:42.184153 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 15:43:42.197871 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 15:43:42.198858 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 15:43:42.208436 initrd-setup-root-after-ignition[1035]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 15:43:42.208436 initrd-setup-root-after-ignition[1035]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 15:43:42.212010 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 15:43:42.214544 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 15:43:42.216330 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 15:43:42.218159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 15:43:42.281816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 15:43:42.281976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 15:43:42.283634 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 15:43:42.284806 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 15:43:42.286115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 15:43:42.287417 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 15:43:42.315751 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 15:43:42.319494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 15:43:42.357339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 15:43:42.359454 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 15:43:42.360465 systemd[1]: Stopped target timers.target - Timer Units. May 15 15:43:42.361834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 15:43:42.362038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 15:43:42.363624 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 15:43:42.364568 systemd[1]: Stopped target basic.target - Basic System. May 15 15:43:42.365961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 15:43:42.367600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 15:43:42.368752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 15:43:42.370054 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 15:43:42.371278 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 15:43:42.372633 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
May 15 15:43:42.374037 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 15:43:42.375326 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 15:43:42.376523 systemd[1]: Stopped target swap.target - Swaps. May 15 15:43:42.377869 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 15:43:42.378057 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 15:43:42.379574 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 15:43:42.380331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 15:43:42.381903 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 15:43:42.382025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 15:43:42.383313 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 15:43:42.383514 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 15:43:42.385280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 15:43:42.385489 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 15:43:42.386690 systemd[1]: ignition-files.service: Deactivated successfully. May 15 15:43:42.386967 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 15:43:42.387885 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 15:43:42.388031 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 15:43:42.391869 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 15:43:42.392576 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 15:43:42.392843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 15:43:42.398149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 15:43:42.399641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 15:43:42.399876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 15:43:42.402248 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 15:43:42.402500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 15:43:42.412329 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 15:43:42.413813 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 15:43:42.433838 ignition[1059]: INFO : Ignition 2.21.0 May 15 15:43:42.433838 ignition[1059]: INFO : Stage: umount May 15 15:43:42.435791 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 15:43:42.435791 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 15 15:43:42.440824 ignition[1059]: INFO : umount: umount passed May 15 15:43:42.441527 ignition[1059]: INFO : Ignition finished successfully May 15 15:43:42.442594 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 15:43:42.442809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 15:43:42.445640 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 15:43:42.446298 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 15:43:42.447781 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 15:43:42.448445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 15 15:43:42.450088 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 15:43:42.450165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 15:43:42.450922 systemd[1]: Stopped target network.target - Network. May 15 15:43:42.452034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 15:43:42.452099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 15:43:42.453488 systemd[1]: Stopped target paths.target - Path Units. May 15 15:43:42.454795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 15:43:42.458942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 15:43:42.504159 systemd[1]: Stopped target slices.target - Slice Units. May 15 15:43:42.505517 systemd[1]: Stopped target sockets.target - Socket Units. May 15 15:43:42.507052 systemd[1]: iscsid.socket: Deactivated successfully. May 15 15:43:42.507129 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 15:43:42.508193 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 15:43:42.508250 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 15:43:42.509457 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 15:43:42.509576 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 15:43:42.510719 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 15:43:42.510785 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 15:43:42.512171 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 15:43:42.513737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 15:43:42.517397 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 15:43:42.521455 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 15:43:42.521589 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 15:43:42.535608 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 15:43:42.536601 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 15:43:42.537069 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 15:43:42.545659 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 15:43:42.546454 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 15:43:42.546588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 15:43:42.548915 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 15:43:42.549349 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 15:43:42.549477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 15:43:42.551485 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 15:43:42.552865 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 15:43:42.552932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 15:43:42.554300 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 15:43:42.554396 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 15:43:42.556847 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
May 15 15:43:42.557957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 15:43:42.558031 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 15:43:42.560038 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 15:43:42.560095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 15:43:42.562855 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 15:43:42.562920 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 15:43:42.565156 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 15:43:42.572170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 15:43:42.583469 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 15:43:42.584495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 15:43:42.586211 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 15:43:42.586331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 15:43:42.587632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 15:43:42.587693 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 15:43:42.589366 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 15:43:42.589473 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 15:43:42.591456 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 15:43:42.591520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 15:43:42.592784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 15:43:42.592888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 15:43:42.596862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 15:43:42.597673 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 15:43:42.597898 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 15:43:42.600779 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 15:43:42.600854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 15:43:42.603353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 15:43:42.603424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:43:42.605590 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 15:43:42.609847 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 15:43:42.617683 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 15:43:42.617886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 15:43:42.619839 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 15:43:42.622042 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 15:43:42.646248 systemd[1]: Switching root. May 15 15:43:42.726679 systemd-journald[213]: Journal stopped May 15 15:43:44.112922 systemd-journald[213]: Received SIGTERM from PID 1 (systemd). 
May 15 15:43:44.113092 kernel: SELinux: policy capability network_peer_controls=1 May 15 15:43:44.113129 kernel: SELinux: policy capability open_perms=1 May 15 15:43:44.113154 kernel: SELinux: policy capability extended_socket_class=1 May 15 15:43:44.113171 kernel: SELinux: policy capability always_check_network=0 May 15 15:43:44.113196 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 15:43:44.113216 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 15:43:44.113234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 15:43:44.113251 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 15:43:44.113302 kernel: SELinux: policy capability userspace_initial_context=0 May 15 15:43:44.113325 kernel: audit: type=1403 audit(1747323822.862:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 15:43:44.113350 systemd[1]: Successfully loaded SELinux policy in 55.548ms. May 15 15:43:44.113382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.351ms. May 15 15:43:44.113403 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 15:43:44.113424 systemd[1]: Detected virtualization kvm. May 15 15:43:44.113442 systemd[1]: Detected architecture x86-64. May 15 15:43:44.113460 systemd[1]: Detected first boot. May 15 15:43:44.113478 systemd[1]: Hostname set to <ci-4334.0.0-a-8a7930f089>. May 15 15:43:44.113500 systemd[1]: Initializing machine ID from VM UUID. May 15 15:43:44.113513 zram_generator::config[1103]: No configuration found. May 15 15:43:44.113528 kernel: Guest personality initialized and is inactive May 15 15:43:44.113540 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 15:43:44.113551 kernel: Initialized host personality May 15 15:43:44.113563 kernel: NET: Registered PF_VSOCK protocol family May 15 15:43:44.113579 systemd[1]: Populated /etc with preset unit settings. May 15 15:43:44.113598 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 15:43:44.113626 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 15:43:44.113644 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 15:43:44.113656 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 15:43:44.113668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 15:43:44.113680 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 15:43:44.113692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 15:43:44.115266 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 15:43:44.115289 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 15:43:44.115302 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 15:43:44.115323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 15:43:44.115335 systemd[1]: Created slice user.slice - User and Session Slice. May 15 15:43:44.115349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 15 15:43:44.115362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 15:43:44.115375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 15:43:44.115387 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 15:43:44.115403 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 15:43:44.115416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 15:43:44.115429 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 15:43:44.115442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 15:43:44.115454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 15:43:44.115466 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 15:43:44.115483 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 15:43:44.115500 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 15:43:44.115519 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 15:43:44.115542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 15:43:44.115611 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 15:43:44.115631 systemd[1]: Reached target slices.target - Slice Units. May 15 15:43:44.115650 systemd[1]: Reached target swap.target - Swaps. May 15 15:43:44.115671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 15:43:44.115688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 15:43:44.115725 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 15:43:44.115740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 15:43:44.115753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 15:43:44.115771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 15:43:44.115785 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 15:43:44.115798 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 15:43:44.115811 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 15:43:44.115822 systemd[1]: Mounting media.mount - External Media Directory... May 15 15:43:44.115835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:44.115848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 15:43:44.115861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 15:43:44.115875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 15:43:44.115891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 15:43:44.115910 systemd[1]: Reached target machines.target - Containers. May 15 15:43:44.115923 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 15 15:43:44.115934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:43:44.115947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 15:43:44.115960 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 15:43:44.115973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 15:43:44.115985 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 15:43:44.116000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 15:43:44.116017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 15:43:44.116028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 15:43:44.116041 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 15:43:44.116054 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 15:43:44.116068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 15:43:44.116080 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 15:43:44.116093 systemd[1]: Stopped systemd-fsck-usr.service. May 15 15:43:44.116107 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:43:44.116122 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 15:43:44.116135 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 15:43:44.116148 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 15:43:44.116161 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 15:43:44.116174 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 15:43:44.116191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 15:43:44.116204 systemd[1]: verity-setup.service: Deactivated successfully. May 15 15:43:44.116215 systemd[1]: Stopped verity-setup.service. May 15 15:43:44.116228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:44.116240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 15:43:44.116255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 15:43:44.116267 systemd[1]: Mounted media.mount - External Media Directory. May 15 15:43:44.116280 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 15:43:44.116293 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 15:43:44.116306 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 15:43:44.116317 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 15:43:44.116330 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 15:43:44.116341 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 15:43:44.116353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 15:43:44.116367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 15:43:44.116380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 15:43:44.116392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 15:43:44.116404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 15:43:44.116416 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 15:43:44.116429 kernel: fuse: init (API version 7.41) May 15 15:43:44.116444 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 15:43:44.120548 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 15:43:44.120593 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 15:43:44.120608 kernel: loop: module loaded May 15 15:43:44.120625 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 15:43:44.120638 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 15:43:44.120651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:43:44.120665 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 15:43:44.120682 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 15:43:44.120786 systemd-journald[1173]: Collecting audit messages is disabled. May 15 15:43:44.120823 systemd-journald[1173]: Journal started May 15 15:43:44.120860 systemd-journald[1173]: Runtime Journal (/run/log/journal/af670fa0397448c5929f23191605c200) is 4.9M, max 39.5M, 34.6M free. May 15 15:43:43.619036 systemd[1]: Queued start job for default target multi-user.target. May 15 15:43:43.643120 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 15:43:44.128892 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 15:43:43.643864 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 15:43:44.139119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 15:43:44.145744 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 15:43:44.156744 systemd[1]: Started systemd-journald.service - Journal Service. May 15 15:43:44.158242 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 15:43:44.159020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 15:43:44.161404 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 15:43:44.165590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 15:43:44.167728 kernel: ACPI: bus type drm_connector registered May 15 15:43:44.174215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 15:43:44.176560 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 15:43:44.178293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 15:43:44.181240 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 15:43:44.183944 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
May 15 15:43:44.237758 kernel: loop0: detected capacity change from 0 to 210664 May 15 15:43:44.249970 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 15:43:44.261930 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 15:43:44.268538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 15:43:44.270975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 15:43:44.275652 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 15:43:44.277329 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 15:43:44.292744 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 15:43:44.293446 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 15:43:44.297409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 15:43:44.343999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 15:43:44.351953 systemd-journald[1173]: Time spent on flushing to /var/log/journal/af670fa0397448c5929f23191605c200 is 86.871ms for 1010 entries. May 15 15:43:44.351953 systemd-journald[1173]: System Journal (/var/log/journal/af670fa0397448c5929f23191605c200) is 8M, max 195.6M, 187.6M free. May 15 15:43:44.490023 systemd-journald[1173]: Received client request to flush runtime journal. May 15 15:43:44.490146 kernel: loop1: detected capacity change from 0 to 146240 May 15 15:43:44.490182 kernel: loop2: detected capacity change from 0 to 8 May 15 15:43:44.490209 kernel: loop3: detected capacity change from 0 to 113872 May 15 15:43:44.398483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 15:43:44.404859 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 15:43:44.415885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 15:43:44.423682 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 15:43:44.497214 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 15:43:44.521938 kernel: loop4: detected capacity change from 0 to 210664 May 15 15:43:44.555932 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 15:43:44.564314 kernel: loop5: detected capacity change from 0 to 146240 May 15 15:43:44.566288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 15:43:44.628131 kernel: loop6: detected capacity change from 0 to 8 May 15 15:43:44.635767 kernel: loop7: detected capacity change from 0 to 113872 May 15 15:43:44.647096 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 15:43:44.666863 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 15 15:43:44.666886 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 15 15:43:44.682307 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 15 15:43:44.687927 (sd-merge)[1247]: Merged extensions into '/usr'. May 15 15:43:44.701105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 15:43:44.704929 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... May 15 15:43:44.704958 systemd[1]: Reloading... 
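
(sd-merge) above found four extension images and overlaid them onto /usr; the reload requested right after by systemd-sysext is what makes the merged unit files visible. A simplified Python sketch of just the discovery half (systemd-sysext(8) documents /etc/extensions, /run/extensions and /var/lib/extensions as search paths; the overlayfs mounting itself is omitted):

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        """Return candidate sysext images: *.raw files or plain directories."""
        found = []
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry)
        return found

    for ext in discover_extensions():
        print("would merge:", ext.name)
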
May 15 15:43:44.871790 zram_generator::config[1277]: No configuration found. May 15 15:43:45.142850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:43:45.241920 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 15:43:45.301064 systemd[1]: Reloading finished in 595 ms. May 15 15:43:45.327047 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 15:43:45.331651 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 15:43:45.340105 systemd[1]: Starting ensure-sysext.service... May 15 15:43:45.344939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 15:43:45.380608 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)... May 15 15:43:45.380630 systemd[1]: Reloading... May 15 15:43:45.409789 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 15:43:45.410622 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 15:43:45.411158 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 15:43:45.411661 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 15:43:45.412990 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 15:43:45.413576 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. May 15 15:43:45.413734 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. May 15 15:43:45.419588 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. May 15 15:43:45.419863 systemd-tmpfiles[1321]: Skipping /boot May 15 15:43:45.445688 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. May 15 15:43:45.447191 systemd-tmpfiles[1321]: Skipping /boot May 15 15:43:45.567746 zram_generator::config[1351]: No configuration found. May 15 15:43:45.751016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:43:45.902407 systemd[1]: Reloading finished in 521 ms. May 15 15:43:45.917284 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 15:43:45.937869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 15:43:45.951576 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 15:43:45.956048 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 15:43:45.958854 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 15:43:45.967127 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 15:43:45.970583 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 15:43:45.975156 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
May 15 15:43:45.980589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:45.980924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:43:45.988248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 15:43:45.997158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 15:43:46.005617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 15:43:46.007161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:43:46.007356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:43:46.007504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.015268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.017130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:43:46.017375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:43:46.017492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:43:46.017588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.025034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.025329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:43:46.033521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 15:43:46.035047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 15:43:46.035283 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:43:46.035514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.038455 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 15:43:46.049667 systemd[1]: Finished ensure-sysext.service. May 15 15:43:46.061193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 15:43:46.061468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 15:43:46.075073 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 15 15:43:46.080390 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 15:43:46.090406 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 15:43:46.094445 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 15:43:46.096917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 15:43:46.098851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 15:43:46.100436 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 15:43:46.101978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 15:43:46.116904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 15:43:46.117025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 15:43:46.117775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 15:43:46.120920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 15:43:46.141009 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 15:43:46.141294 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 15:43:46.163180 systemd-udevd[1397]: Using default interface naming scheme 'v255'. May 15 15:43:46.166555 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 15:43:46.209715 augenrules[1434]: No rules May 15 15:43:46.211763 systemd[1]: audit-rules.service: Deactivated successfully. May 15 15:43:46.213482 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 15:43:46.226274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 15:43:46.233148 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 15:43:46.234951 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 15:43:46.495199 systemd-networkd[1445]: lo: Link UP May 15 15:43:46.495214 systemd-networkd[1445]: lo: Gained carrier May 15 15:43:46.499272 systemd-networkd[1445]: Enumeration completed May 15 15:43:46.499858 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 15:43:46.520262 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 15 15:43:46.527181 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 15 15:43:46.529339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.529602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 15:43:46.538194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 15:43:46.546794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 15:43:46.556114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 15:43:46.558076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 15:43:46.558151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 15:43:46.564036 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 15:43:46.570051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 15:43:46.572956 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 15:43:46.573031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 15:43:46.573498 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 15:43:46.576565 systemd[1]: Reached target time-set.target - System Time Set. May 15 15:43:46.621431 systemd-resolved[1396]: Positive Trust Anchors: May 15 15:43:46.621456 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 15:43:46.621497 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 15:43:46.625340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 15:43:46.625745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 15:43:46.628098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 15:43:46.635748 kernel: ISO 9660 Extensions: RRIP_1991A May 15 15:43:46.640553 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 15 15:43:46.658084 systemd-resolved[1396]: Using system hostname 'ci-4334.0.0-a-8a7930f089'. May 15 15:43:46.667720 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 15:43:46.673159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 15:43:46.674972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 15:43:46.679861 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 15:43:46.680136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 15:43:46.682806 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 15:43:46.689202 systemd[1]: Reached target network.target - Network. May 15 15:43:46.690930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 15:43:46.694886 systemd[1]: Reached target sysinit.target - System Initialization. May 15 15:43:46.695650 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 15 15:43:46.696617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 15:43:46.698211 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 15:43:46.700089 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 15:43:46.701742 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 15:43:46.703012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 15:43:46.704956 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 15:43:46.705094 systemd[1]: Reached target paths.target - Path Units. May 15 15:43:46.706825 systemd[1]: Reached target timers.target - Timer Units. May 15 15:43:46.709439 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 15:43:46.716588 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 15:43:46.726633 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 15:43:46.730226 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 15:43:46.731060 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 15:43:46.744186 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 15:43:46.746387 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 15:43:46.748094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 15:43:46.749967 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 15:43:46.761641 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 15:43:46.762899 systemd[1]: Reached target sockets.target - Socket Units. May 15 15:43:46.764462 systemd[1]: Reached target basic.target - Basic System. May 15 15:43:46.765278 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 15:43:46.765330 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 15:43:46.768574 systemd[1]: Starting containerd.service - containerd container runtime... May 15 15:43:46.775111 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 15:43:46.781066 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 15:43:46.792206 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 15:43:46.799973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 15:43:46.813070 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 15:43:46.814478 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 15:43:46.821367 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
May 15 15:43:46.825326 coreos-metadata[1491]: May 15 15:43:46.823 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:43:46.825326 coreos-metadata[1491]: May 15 15:43:46.823 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 15 15:43:46.832832 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 15:43:46.837366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 15:43:46.853379 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 15:43:46.869013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 15:43:46.880234 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 15:43:46.884809 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 15:43:46.888828 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 15:43:46.897792 systemd[1]: Starting update-engine.service - Update Engine... May 15 15:43:46.899132 oslogin_cache_refresh[1499]: Refreshing passwd entry cache May 15 15:43:46.901361 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing passwd entry cache May 15 15:43:46.902248 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 15:43:46.909799 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting users, quitting May 15 15:43:46.909799 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 15:43:46.909799 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing group entry cache May 15 15:43:46.906942 oslogin_cache_refresh[1499]: Failure getting users, quitting May 15 15:43:46.906973 oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 15:43:46.907048 oslogin_cache_refresh[1499]: Refreshing group entry cache May 15 15:43:46.929844 jq[1496]: false May 15 15:43:46.930174 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting groups, quitting May 15 15:43:46.930174 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 15:43:46.914788 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
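
coreos-metadata[1491]'s first fetch at the top of these entries failed because the network was not yet configured; the agent simply numbers its attempts and retries, and a later attempt succeeds once eth0/eth1 come up (visible in the entries that follow). A toy version of that retry loop in Python (the interval and attempt cap are assumptions; the log only shows numbered attempts):

    import time
    import urllib.request

    URL = "http://169.254.169.254/metadata/v1.json"

    def fetch_with_retries(url, attempts=10, delay=1.0):
        for attempt in range(1, attempts + 1):
            print(f"Fetching {url}: Attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except OSError as err:  # URLError is an OSError subclass
                print("Failed to fetch:", err)
                time.sleep(delay)
        raise RuntimeError("metadata service never became reachable")
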
May 15 15:43:46.930399 extend-filesystems[1497]: Found loop4 May 15 15:43:46.930399 extend-filesystems[1497]: Found loop5 May 15 15:43:46.930399 extend-filesystems[1497]: Found loop6 May 15 15:43:46.930399 extend-filesystems[1497]: Found loop7 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda May 15 15:43:46.930399 extend-filesystems[1497]: Found vda1 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda2 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda3 May 15 15:43:46.930399 extend-filesystems[1497]: Found usr May 15 15:43:46.930399 extend-filesystems[1497]: Found vda4 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda6 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda7 May 15 15:43:46.930399 extend-filesystems[1497]: Found vda9 May 15 15:43:46.930399 extend-filesystems[1497]: Found vdb May 15 15:43:46.912170 oslogin_cache_refresh[1499]: Failure getting groups, quitting May 15 15:43:47.026018 jq[1515]: true May 15 15:43:46.917621 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 15:43:46.912192 oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 15:43:46.923166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 15:43:46.923611 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 15:43:46.923835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 15:43:47.038030 tar[1518]: linux-amd64/helm May 15 15:43:46.927593 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 15:43:46.928379 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 15:43:46.932005 systemd[1]: motdgen.service: Deactivated successfully. May 15 15:43:46.932351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 15:43:46.960577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 15:43:46.961805 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 15:43:47.059761 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 15:43:47.073152 update_engine[1514]: I20250515 15:43:47.071102 1514 main.cc:92] Flatcar Update Engine starting May 15 15:43:47.080832 jq[1519]: true May 15 15:43:47.099688 dbus-daemon[1492]: [system] SELinux support is enabled May 15 15:43:47.100374 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 15:43:47.114096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 15:43:47.114147 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 15:43:47.118742 update_engine[1514]: I20250515 15:43:47.117332 1514 update_check_scheduler.cc:74] Next update check in 8m2s May 15 15:43:47.117786 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 15:43:47.117885 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). 
May 15 15:43:47.117914 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 15:43:47.121182 systemd[1]: Started update-engine.service - Update Engine. May 15 15:43:47.169815 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 15:43:47.213951 systemd-logind[1513]: New seat seat0. May 15 15:43:47.215876 systemd[1]: Started systemd-logind.service - User Login Management. May 15 15:43:47.269569 systemd-networkd[1445]: eth0: Configuring with /run/systemd/network/10-b6:e2:41:95:d5:1e.network. May 15 15:43:47.272591 systemd-networkd[1445]: eth0: Link UP May 15 15:43:47.273047 systemd-networkd[1445]: eth0: Gained carrier May 15 15:43:47.285487 bash[1556]: Updated "/home/core/.ssh/authorized_keys" May 15 15:43:47.287686 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 15:43:47.294134 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. May 15 15:43:47.297360 systemd[1]: Starting sshkeys.service... May 15 15:43:47.336532 systemd-networkd[1445]: eth1: Configuring with /run/systemd/network/10-ee:8c:43:48:69:9c.network. May 15 15:43:47.345040 systemd-networkd[1445]: eth1: Link UP May 15 15:43:47.348115 systemd-networkd[1445]: eth1: Gained carrier May 15 15:43:47.368872 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 15:43:47.373456 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 15:43:47.398011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 15:43:47.410232 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 15:43:47.495744 kernel: mousedev: PS/2 mouse device common for all mice May 15 15:43:47.558955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 15:43:47.586770 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 15:43:47.605014 coreos-metadata[1560]: May 15 15:43:47.604 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 15 15:43:47.628734 coreos-metadata[1560]: May 15 15:43:47.626 INFO Fetch successful May 15 15:43:47.629181 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 15:43:47.652817 unknown[1560]: wrote ssh authorized keys file for user: core May 15 15:43:47.722306 kernel: ACPI: button: Power Button [PWRF] May 15 15:43:47.733942 update-ssh-keys[1575]: Updated "/home/core/.ssh/authorized_keys" May 15 15:43:47.738981 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 15:43:47.752025 systemd[1]: Finished sshkeys.service. 
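The 10-b6:e2:41:95:d5:1e.network file that systemd-networkd applies to eth0 here was generated earlier in boot and is matched to the NIC by MAC address. For shape only, such a file looks roughly like the following; the real one carries the droplet's actual addressing, so the DHCP setting below is a placeholder:

    [Match]
    MACAddress=b6:e2:41:95:d5:1e

    [Network]
    DHCP=ipv4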
May 15 15:43:47.756287 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 15 15:43:47.756475 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 15 15:43:47.756900 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 15 15:43:47.762195 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 15:43:47.762495 kernel: Console: switching to colour dummy device 80x25 May 15 15:43:47.762525 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 15 15:43:47.762549 kernel: [drm] features: -context_init May 15 15:43:47.767443 kernel: [drm] number of scanouts: 1 May 15 15:43:47.767550 kernel: [drm] number of cap sets: 0 May 15 15:43:47.767595 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 15 15:43:47.824009 coreos-metadata[1491]: May 15 15:43:47.823 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 15 15:43:47.837794 coreos-metadata[1491]: May 15 15:43:47.837 INFO Fetch successful May 15 15:43:47.914838 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 15:43:47.915586 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 15:43:47.991135 containerd[1530]: time="2025-05-15T15:43:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 15:43:47.999748 containerd[1530]: time="2025-05-15T15:43:47.999650916Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 15:43:48.027074 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.088991299Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.445µs" May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089050885Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089091747Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089357295Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089382109Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089421486Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089514072Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089528268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089946516Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs 
filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089974099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.089991273Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 15:43:48.108833 containerd[1530]: time="2025-05-15T15:43:48.090003440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.090137356Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.090473562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.090520665Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.090534946Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.090570026Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.094979938Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 15:43:48.109393 containerd[1530]: time="2025-05-15T15:43:48.095167844Z" level=info msg="metadata content store policy set" policy=shared May 15 15:43:48.116320 containerd[1530]: time="2025-05-15T15:43:48.116215203Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 15:43:48.116527 containerd[1530]: time="2025-05-15T15:43:48.116461273Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 15:43:48.116527 containerd[1530]: time="2025-05-15T15:43:48.116485036Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 15:43:48.116527 containerd[1530]: time="2025-05-15T15:43:48.116497805Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 15:43:48.116527 containerd[1530]: time="2025-05-15T15:43:48.116510442Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 15:43:48.116527 containerd[1530]: time="2025-05-15T15:43:48.116522403Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116547914Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116560039Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 
15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116571312Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116582240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116611295Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 15:43:48.116691 containerd[1530]: time="2025-05-15T15:43:48.116627132Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116842581Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116864941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116880178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116896711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116911694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 15:43:48.116933 containerd[1530]: time="2025-05-15T15:43:48.116928210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.116961274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.116994404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.117032188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.117058904Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.117071893Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 15:43:48.117160 containerd[1530]: time="2025-05-15T15:43:48.117148860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 15:43:48.117384 containerd[1530]: time="2025-05-15T15:43:48.117163419Z" level=info msg="Start snapshots syncer" May 15 15:43:48.117384 containerd[1530]: time="2025-05-15T15:43:48.117184686Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 15:43:48.117611 containerd[1530]: time="2025-05-15T15:43:48.117527943Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 15:43:48.117799 containerd[1530]: time="2025-05-15T15:43:48.117622682Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 15:43:48.123505 containerd[1530]: time="2025-05-15T15:43:48.123364665Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 15:43:48.123802 containerd[1530]: time="2025-05-15T15:43:48.123738882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 15:43:48.123851 containerd[1530]: time="2025-05-15T15:43:48.123804314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 15:43:48.123851 containerd[1530]: time="2025-05-15T15:43:48.123820141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 15:43:48.123851 containerd[1530]: time="2025-05-15T15:43:48.123833194Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 15:43:48.123851 containerd[1530]: time="2025-05-15T15:43:48.123848682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 15:43:48.123988 containerd[1530]: time="2025-05-15T15:43:48.123875499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 15:43:48.123988 containerd[1530]: time="2025-05-15T15:43:48.123890106Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 15:43:48.123988 containerd[1530]: time="2025-05-15T15:43:48.123922293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 15:43:48.123988 containerd[1530]: 
time="2025-05-15T15:43:48.123933637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 15:43:48.123988 containerd[1530]: time="2025-05-15T15:43:48.123962599Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 15:43:48.124176 containerd[1530]: time="2025-05-15T15:43:48.124095259Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 15:43:48.124176 containerd[1530]: time="2025-05-15T15:43:48.124119938Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 15:43:48.124176 containerd[1530]: time="2025-05-15T15:43:48.124135178Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 15:43:48.124176 containerd[1530]: time="2025-05-15T15:43:48.124166133Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 15:43:48.124176 containerd[1530]: time="2025-05-15T15:43:48.124175627Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124185294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124200657Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124213181Z" level=info msg="runtime interface created" May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124218719Z" level=info msg="created NRI interface" May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124242575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124259079Z" level=info msg="Connect containerd service" May 15 15:43:48.124334 containerd[1530]: time="2025-05-15T15:43:48.124287646Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 15:43:48.138809 containerd[1530]: time="2025-05-15T15:43:48.138745514Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 15:43:48.166823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 15:43:48.174167 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 15:43:48.238274 systemd[1]: issuegen.service: Deactivated successfully. May 15 15:43:48.238606 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 15:43:48.250355 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 15:43:48.345146 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 15:43:48.349363 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 15:43:48.357469 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 15:43:48.357871 systemd[1]: Reached target getty.target - Login Prompts. 
May 15 15:43:48.526388 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 15:43:48.541361 systemd-logind[1513]: Watching system buttons on /dev/input/event2 (Power Button) May 15 15:43:48.566108 containerd[1530]: time="2025-05-15T15:43:48.565898274Z" level=info msg="Start subscribing containerd event" May 15 15:43:48.566108 containerd[1530]: time="2025-05-15T15:43:48.565965882Z" level=info msg="Start recovering state" May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566303524Z" level=info msg="Start event monitor" May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566331622Z" level=info msg="Start cni network conf syncer for default" May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566341288Z" level=info msg="Start streaming server" May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566354041Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566364189Z" level=info msg="runtime interface starting up..." May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566370925Z" level=info msg="starting plugins..." May 15 15:43:48.566728 containerd[1530]: time="2025-05-15T15:43:48.566385827Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 15:43:48.571744 containerd[1530]: time="2025-05-15T15:43:48.570245086Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 15:43:48.571744 containerd[1530]: time="2025-05-15T15:43:48.570359042Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 15:43:48.573502 containerd[1530]: time="2025-05-15T15:43:48.573465966Z" level=info msg="containerd successfully booted in 0.583111s" May 15 15:43:48.576841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:43:48.577372 systemd[1]: Started containerd.service - containerd container runtime. May 15 15:43:48.614473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 15:43:48.615018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:43:48.619432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 15:43:48.642613 kernel: EDAC MC: Ver: 3.0.0 May 15 15:43:48.729292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 15:43:48.844762 tar[1518]: linux-amd64/LICENSE May 15 15:43:48.845286 tar[1518]: linux-amd64/README.md May 15 15:43:48.865838 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 15:43:49.217157 systemd-networkd[1445]: eth1: Gained IPv6LL May 15 15:43:49.221219 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 15:43:49.222023 systemd[1]: Reached target network-online.target - Network is Online. May 15 15:43:49.226689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:43:49.230067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 15:43:49.277127 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 15:43:49.281059 systemd-networkd[1445]: eth0: Gained IPv6LL May 15 15:43:49.755850 systemd-timesyncd[1414]: Contacted time server 144.202.66.214:123 (0.flatcar.pool.ntp.org). May 15 15:43:49.756473 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2025-05-15 15:43:49.449465 UTC. 
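The large JSON blob logged at "starting cri plugin" a few entries back is containerd's parsed CRI configuration, and the SystemdCgroup:true buried in it is the setting most often hand-edited. In the long-standing version 2 TOML form (the "Configuration migrated from version 2" line shows this host's on-disk file is still that version), it lives at:

    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true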
May 15 15:43:50.281056 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 15:43:50.284916 systemd[1]: Started sshd@0-164.92.106.96:22-139.178.68.195:37636.service - OpenSSH per-connection server daemon (139.178.68.195:37636). May 15 15:43:50.363225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:43:50.366215 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 15:43:50.367829 systemd[1]: Startup finished in 4.332s (kernel) + 6.163s (initrd) + 7.559s (userspace) = 18.055s. May 15 15:43:50.371645 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 15:43:50.391259 sshd[1654]: Accepted publickey for core from 139.178.68.195 port 37636 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:50.391841 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:50.402068 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 15:43:50.403339 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 15:43:50.419634 systemd-logind[1513]: New session 1 of user core. May 15 15:43:50.441511 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 15:43:50.447301 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 15:43:50.465002 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 15:43:50.470297 systemd-logind[1513]: New session c1 of user core. May 15 15:43:50.643029 systemd[1668]: Queued start job for default target default.target. May 15 15:43:50.653900 systemd[1668]: Created slice app.slice - User Application Slice. May 15 15:43:50.653957 systemd[1668]: Reached target paths.target - Paths. May 15 15:43:50.654148 systemd[1668]: Reached target timers.target - Timers. May 15 15:43:50.656223 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 15:43:50.693879 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 15:43:50.694059 systemd[1668]: Reached target sockets.target - Sockets. May 15 15:43:50.694461 systemd[1668]: Reached target basic.target - Basic System. May 15 15:43:50.694535 systemd[1668]: Reached target default.target - Main User Target. May 15 15:43:50.694575 systemd[1668]: Startup finished in 207ms. May 15 15:43:50.694691 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 15:43:50.702207 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 15:43:50.782091 systemd[1]: Started sshd@1-164.92.106.96:22-139.178.68.195:37648.service - OpenSSH per-connection server daemon (139.178.68.195:37648). May 15 15:43:50.871983 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 37648 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:50.873315 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:50.881791 systemd-logind[1513]: New session 2 of user core. May 15 15:43:50.888078 systemd[1]: Started session-2.scope - Session 2 of User core. 
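sshd logs only the SHA256 fingerprint of the key it accepted. To see which local key that corresponds to, the fingerprints can be printed from the file the update-ssh-keys entries above wrote:

    ssh-keygen -lf /home/core/.ssh/authorized_keys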
May 15 15:43:50.956410 sshd[1685]: Connection closed by 139.178.68.195 port 37648 May 15 15:43:50.957975 sshd-session[1683]: pam_unix(sshd:session): session closed for user core May 15 15:43:50.969559 systemd[1]: sshd@1-164.92.106.96:22-139.178.68.195:37648.service: Deactivated successfully. May 15 15:43:50.973119 systemd[1]: session-2.scope: Deactivated successfully. May 15 15:43:50.974575 systemd-logind[1513]: Session 2 logged out. Waiting for processes to exit. May 15 15:43:50.979832 systemd-logind[1513]: Removed session 2. May 15 15:43:50.981034 systemd[1]: Started sshd@2-164.92.106.96:22-139.178.68.195:37660.service - OpenSSH per-connection server daemon (139.178.68.195:37660). May 15 15:43:51.043729 sshd[1692]: Accepted publickey for core from 139.178.68.195 port 37660 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:51.046170 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:51.055313 systemd-logind[1513]: New session 3 of user core. May 15 15:43:51.061047 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 15:43:51.122630 sshd[1694]: Connection closed by 139.178.68.195 port 37660 May 15 15:43:51.126304 sshd-session[1692]: pam_unix(sshd:session): session closed for user core May 15 15:43:51.135643 systemd[1]: sshd@2-164.92.106.96:22-139.178.68.195:37660.service: Deactivated successfully. May 15 15:43:51.138570 systemd[1]: session-3.scope: Deactivated successfully. May 15 15:43:51.141688 systemd-logind[1513]: Session 3 logged out. Waiting for processes to exit. May 15 15:43:51.147544 systemd[1]: Started sshd@3-164.92.106.96:22-139.178.68.195:37664.service - OpenSSH per-connection server daemon (139.178.68.195:37664). May 15 15:43:51.149101 systemd-logind[1513]: Removed session 3. May 15 15:43:51.192537 kubelet[1660]: E0515 15:43:51.192468 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 15:43:51.196320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 15:43:51.196478 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 15:43:51.197494 systemd[1]: kubelet.service: Consumed 1.479s CPU time, 242.4M memory peak. May 15 15:43:51.214946 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 37664 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:51.216274 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:51.224047 systemd-logind[1513]: New session 4 of user core. May 15 15:43:51.234041 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 15:43:51.297956 sshd[1703]: Connection closed by 139.178.68.195 port 37664 May 15 15:43:51.298907 sshd-session[1700]: pam_unix(sshd:session): session closed for user core May 15 15:43:51.308908 systemd[1]: sshd@3-164.92.106.96:22-139.178.68.195:37664.service: Deactivated successfully. May 15 15:43:51.311120 systemd[1]: session-4.scope: Deactivated successfully. May 15 15:43:51.313191 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit. May 15 15:43:51.315440 systemd[1]: Started sshd@4-164.92.106.96:22-139.178.68.195:37678.service - OpenSSH per-connection server daemon (139.178.68.195:37678). 
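The kubelet exit above is the usual crash loop on a node that has not yet been through kubeadm init/join: /var/lib/kubelet/config.yaml is normally written by kubeadm rather than shipped in the image. For shape only, a minimal KubeletConfiguration looks like this (illustrative values, not what kubeadm will generate for this node):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests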
May 15 15:43:51.317456 systemd-logind[1513]: Removed session 4. May 15 15:43:51.377881 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 37678 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:51.379860 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:51.388853 systemd-logind[1513]: New session 5 of user core. May 15 15:43:51.400103 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 15:43:51.472235 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 15:43:51.472529 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:43:51.487895 sudo[1712]: pam_unix(sudo:session): session closed for user root May 15 15:43:51.492236 sshd[1711]: Connection closed by 139.178.68.195 port 37678 May 15 15:43:51.493650 sshd-session[1709]: pam_unix(sshd:session): session closed for user core May 15 15:43:51.507206 systemd[1]: sshd@4-164.92.106.96:22-139.178.68.195:37678.service: Deactivated successfully. May 15 15:43:51.509858 systemd[1]: session-5.scope: Deactivated successfully. May 15 15:43:51.511012 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit. May 15 15:43:51.516181 systemd[1]: Started sshd@5-164.92.106.96:22-139.178.68.195:37688.service - OpenSSH per-connection server daemon (139.178.68.195:37688). May 15 15:43:51.517553 systemd-logind[1513]: Removed session 5. May 15 15:43:51.595942 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 37688 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:51.598489 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:51.607803 systemd-logind[1513]: New session 6 of user core. May 15 15:43:51.615138 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 15:43:51.678909 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 15:43:51.679230 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:43:51.686859 sudo[1722]: pam_unix(sudo:session): session closed for user root May 15 15:43:51.695510 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 15:43:51.696361 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:43:51.713257 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 15:43:51.782655 augenrules[1744]: No rules May 15 15:43:51.784316 systemd[1]: audit-rules.service: Deactivated successfully. May 15 15:43:51.784615 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 15:43:51.787104 sudo[1721]: pam_unix(sudo:session): session closed for user root May 15 15:43:51.790992 sshd[1720]: Connection closed by 139.178.68.195 port 37688 May 15 15:43:51.791982 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 15 15:43:51.805238 systemd[1]: sshd@5-164.92.106.96:22-139.178.68.195:37688.service: Deactivated successfully. May 15 15:43:51.808091 systemd[1]: session-6.scope: Deactivated successfully. May 15 15:43:51.809554 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit. May 15 15:43:51.816182 systemd[1]: Started sshd@6-164.92.106.96:22-139.178.68.195:37696.service - OpenSSH per-connection server daemon (139.178.68.195:37696). 
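augenrules reporting "No rules" follows directly from the sudo commands just above, which deleted the shipped files under /etc/audit/rules.d/ before restarting audit-rules. Files in that directory use auditctl syntax, one rule per line; an illustrative example (not a rule Flatcar ships):

    # watch sshd_config for writes and attribute changes, tag hits "sshd-config"
    -w /etc/ssh/sshd_config -p wa -k sshd-config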
May 15 15:43:51.817842 systemd-logind[1513]: Removed session 6. May 15 15:43:51.886370 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 37696 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:43:51.888457 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:43:51.895492 systemd-logind[1513]: New session 7 of user core. May 15 15:43:51.910062 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 15:43:51.970632 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 15:43:51.971561 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 15:43:52.614595 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 15:43:52.642495 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 15:43:53.021220 dockerd[1775]: time="2025-05-15T15:43:53.021140074Z" level=info msg="Starting up" May 15 15:43:53.023883 dockerd[1775]: time="2025-05-15T15:43:53.023837582Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 15:43:53.068370 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3721428934-merged.mount: Deactivated successfully. May 15 15:43:53.196740 dockerd[1775]: time="2025-05-15T15:43:53.196421301Z" level=info msg="Loading containers: start." May 15 15:43:53.214748 kernel: Initializing XFRM netlink socket May 15 15:43:53.562823 systemd-networkd[1445]: docker0: Link UP May 15 15:43:53.569430 dockerd[1775]: time="2025-05-15T15:43:53.569116329Z" level=info msg="Loading containers: done." May 15 15:43:53.597368 dockerd[1775]: time="2025-05-15T15:43:53.597282746Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 15:43:53.597636 dockerd[1775]: time="2025-05-15T15:43:53.597422487Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 15:43:53.597636 dockerd[1775]: time="2025-05-15T15:43:53.597605183Z" level=info msg="Initializing buildkit" May 15 15:43:53.630622 dockerd[1775]: time="2025-05-15T15:43:53.630547236Z" level=info msg="Completed buildkit initialization" May 15 15:43:53.641927 dockerd[1775]: time="2025-05-15T15:43:53.641827640Z" level=info msg="Daemon has completed initialization" May 15 15:43:53.642425 dockerd[1775]: time="2025-05-15T15:43:53.642144421Z" level=info msg="API listen on /run/docker.sock" May 15 15:43:53.642750 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 15:43:54.687996 containerd[1530]: time="2025-05-15T15:43:54.687925012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 15:43:55.266752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379064147.mount: Deactivated successfully. 
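Once "API listen on /run/docker.sock" appears, anything that can speak HTTP over a unix socket can talk to the daemon, for example:

    curl --unix-socket /run/docker.sock http://localhost/version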
May 15 15:43:57.161171 containerd[1530]: time="2025-05-15T15:43:57.161058992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:57.162478 containerd[1530]: time="2025-05-15T15:43:57.162418020Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 15 15:43:57.165262 containerd[1530]: time="2025-05-15T15:43:57.165182807Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:57.169376 containerd[1530]: time="2025-05-15T15:43:57.169298778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:57.170943 containerd[1530]: time="2025-05-15T15:43:57.170381966Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.482390469s" May 15 15:43:57.170943 containerd[1530]: time="2025-05-15T15:43:57.170426991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 15 15:43:57.198668 containerd[1530]: time="2025-05-15T15:43:57.198605407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 15:43:59.503940 containerd[1530]: time="2025-05-15T15:43:59.503778420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:59.505211 containerd[1530]: time="2025-05-15T15:43:59.504991193Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 15 15:43:59.506106 containerd[1530]: time="2025-05-15T15:43:59.506017757Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:59.510366 containerd[1530]: time="2025-05-15T15:43:59.510310711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:43:59.512205 containerd[1530]: time="2025-05-15T15:43:59.511998419Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.31332606s" May 15 15:43:59.512205 containerd[1530]: time="2025-05-15T15:43:59.512062379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 15 
15:43:59.541111 containerd[1530]: time="2025-05-15T15:43:59.541061507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 15:44:00.927764 containerd[1530]: time="2025-05-15T15:44:00.926508255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:00.927764 containerd[1530]: time="2025-05-15T15:44:00.927735930Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 15 15:44:00.928532 containerd[1530]: time="2025-05-15T15:44:00.928502496Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:00.932052 containerd[1530]: time="2025-05-15T15:44:00.931987037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:00.933388 containerd[1530]: time="2025-05-15T15:44:00.933334215Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.392214276s" May 15 15:44:00.933591 containerd[1530]: time="2025-05-15T15:44:00.933565587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 15 15:44:00.962644 containerd[1530]: time="2025-05-15T15:44:00.962576418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 15:44:00.965406 systemd-resolved[1396]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 15 15:44:01.411563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 15:44:01.414441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:01.687924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:01.706509 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 15:44:01.844372 kubelet[2092]: E0515 15:44:01.844279 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 15:44:01.851446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 15:44:01.852320 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 15:44:01.853328 systemd[1]: kubelet.service: Consumed 302ms CPU time, 96.4M memory peak. May 15 15:44:02.381493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428188227.mount: Deactivated successfully. 
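Each "Pulled image" entry pairs a byte count with a wall-clock duration, which gives a rough registry throughput. Using the kube-scheduler pull above:

    # numbers copied from the "Pulled image ... kube-scheduler" line
    size_bytes = 19392073
    seconds = 1.392214276
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~13.9 MB/s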
May 15 15:44:03.307119 containerd[1530]: time="2025-05-15T15:44:03.307035193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:03.311040 containerd[1530]: time="2025-05-15T15:44:03.310965727Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 15 15:44:03.314109 containerd[1530]: time="2025-05-15T15:44:03.313964294Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:03.318768 containerd[1530]: time="2025-05-15T15:44:03.318573147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:03.320295 containerd[1530]: time="2025-05-15T15:44:03.319450343Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.356564602s" May 15 15:44:03.320295 containerd[1530]: time="2025-05-15T15:44:03.319755502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 15 15:44:03.365559 containerd[1530]: time="2025-05-15T15:44:03.365060805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:44:03.935318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032977848.mount: Deactivated successfully. May 15 15:44:04.065251 systemd-resolved[1396]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
May 15 15:44:05.383687 containerd[1530]: time="2025-05-15T15:44:05.383503611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:05.386075 containerd[1530]: time="2025-05-15T15:44:05.385993850Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 15:44:05.388776 containerd[1530]: time="2025-05-15T15:44:05.388241042Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:05.398866 containerd[1530]: time="2025-05-15T15:44:05.398747357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:05.404078 containerd[1530]: time="2025-05-15T15:44:05.404009231Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.03888202s" May 15 15:44:05.405912 containerd[1530]: time="2025-05-15T15:44:05.404546572Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:44:05.444784 containerd[1530]: time="2025-05-15T15:44:05.444681203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 15:44:05.981386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234134781.mount: Deactivated successfully. 
May 15 15:44:05.995520 containerd[1530]: time="2025-05-15T15:44:05.995394879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:05.997128 containerd[1530]: time="2025-05-15T15:44:05.997041366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 15 15:44:05.999059 containerd[1530]: time="2025-05-15T15:44:05.998943799Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:06.002826 containerd[1530]: time="2025-05-15T15:44:06.002738601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:06.004759 containerd[1530]: time="2025-05-15T15:44:06.003540225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 558.769126ms" May 15 15:44:06.004759 containerd[1530]: time="2025-05-15T15:44:06.003611510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:44:06.044826 containerd[1530]: time="2025-05-15T15:44:06.044762751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 15:44:06.676555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651127562.mount: Deactivated successfully. May 15 15:44:10.171825 containerd[1530]: time="2025-05-15T15:44:10.171658167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:10.174745 containerd[1530]: time="2025-05-15T15:44:10.174641163Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 15 15:44:10.178741 containerd[1530]: time="2025-05-15T15:44:10.176890724Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:10.187535 containerd[1530]: time="2025-05-15T15:44:10.187444262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:10.189572 containerd[1530]: time="2025-05-15T15:44:10.189461358Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.144349876s" May 15 15:44:10.189897 containerd[1530]: time="2025-05-15T15:44:10.189857876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:44:11.911327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
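"Scheduled restart job, restart counter is at 2" is systemd's Restart= handling re-running the failed kubelet; the counter it refers to can be read back at runtime with:

    systemctl show kubelet.service -p NRestarts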
May 15 15:44:11.918166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:12.174034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:12.195360 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 15:44:12.284749 kubelet[2301]: E0515 15:44:12.284359 2301 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 15:44:12.288590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 15:44:12.289460 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 15:44:12.290408 systemd[1]: kubelet.service: Consumed 243ms CPU time, 97.4M memory peak. May 15 15:44:14.921837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:14.922635 systemd[1]: kubelet.service: Consumed 243ms CPU time, 97.4M memory peak. May 15 15:44:14.926751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:14.956911 systemd[1]: Reload requested from client PID 2314 ('systemctl') (unit session-7.scope)... May 15 15:44:14.956935 systemd[1]: Reloading... May 15 15:44:15.135735 zram_generator::config[2358]: No configuration found. May 15 15:44:15.352372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:44:15.543933 systemd[1]: Reloading finished in 586 ms. May 15 15:44:15.626786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 15:44:15.626949 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 15:44:15.627596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:15.627772 systemd[1]: kubelet.service: Consumed 134ms CPU time, 83.6M memory peak. May 15 15:44:15.631281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:15.819760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:15.833461 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 15:44:15.896606 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:44:15.897093 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 15:44:15.897149 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
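Each "Flag --... has been deprecated" warning points at the same fix: move the setting into the file passed via --config. As one example, --container-runtime-endpoint maps to this KubeletConfiguration field (kubelet.config.k8s.io/v1beta1; the endpoint shown is containerd's conventional socket path, assumed here rather than taken from this node's drop-ins):

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock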
May 15 15:44:15.900883 kubelet[2412]: I0515 15:44:15.900746 2412 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 15:44:16.695037 kubelet[2412]: I0515 15:44:16.694936 2412 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 15:44:16.695037 kubelet[2412]: I0515 15:44:16.694998 2412 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 15:44:16.695354 kubelet[2412]: I0515 15:44:16.695307 2412 server.go:927] "Client rotation is on, will bootstrap in background" May 15 15:44:16.727410 kubelet[2412]: I0515 15:44:16.725356 2412 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 15:44:16.729415 kubelet[2412]: E0515 15:44:16.729368 2412 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.106.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.750579 kubelet[2412]: I0515 15:44:16.750507 2412 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 15:44:16.753644 kubelet[2412]: I0515 15:44:16.753515 2412 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 15:44:16.753970 kubelet[2412]: I0515 15:44:16.753615 2412 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-8a7930f089","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 15:44:16.754736 kubelet[2412]: I0515 15:44:16.754619 2412 topology_manager.go:138] "Creating topology manager with none policy" May 15 15:44:16.754736 kubelet[2412]: I0515 15:44:16.754738 2412 container_manager_linux.go:301] "Creating device plugin manager" May 15 15:44:16.754978 kubelet[2412]: I0515 15:44:16.754951 2412 state_mem.go:36] "Initialized new in-memory 
state store" May 15 15:44:16.756037 kubelet[2412]: I0515 15:44:16.755976 2412 kubelet.go:400] "Attempting to sync node with API server" May 15 15:44:16.757121 kubelet[2412]: I0515 15:44:16.757096 2412 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 15:44:16.757252 kubelet[2412]: I0515 15:44:16.757152 2412 kubelet.go:312] "Adding apiserver pod source" May 15 15:44:16.757252 kubelet[2412]: I0515 15:44:16.757180 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 15:44:16.758741 kubelet[2412]: W0515 15:44:16.756979 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.106.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089&limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.758741 kubelet[2412]: E0515 15:44:16.757385 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.106.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089&limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.760996 kubelet[2412]: W0515 15:44:16.760916 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.106.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.760996 kubelet[2412]: E0515 15:44:16.761006 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.106.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.761774 kubelet[2412]: I0515 15:44:16.761743 2412 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 15:44:16.764565 kubelet[2412]: I0515 15:44:16.763610 2412 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 15:44:16.764565 kubelet[2412]: W0515 15:44:16.763744 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 15:44:16.764800 kubelet[2412]: I0515 15:44:16.764746 2412 server.go:1264] "Started kubelet" May 15 15:44:16.769500 kubelet[2412]: I0515 15:44:16.769448 2412 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 15:44:16.771438 kubelet[2412]: I0515 15:44:16.771390 2412 server.go:455] "Adding debug handlers to kubelet server" May 15 15:44:16.773789 kubelet[2412]: E0515 15:44:16.771735 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.106.96:6443/api/v1/namespaces/default/events\": dial tcp 164.92.106.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-8a7930f089.183fbdca3e0651fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-8a7930f089,UID:ci-4334.0.0-a-8a7930f089,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-8a7930f089,},FirstTimestamp:2025-05-15 15:44:16.764686844 +0000 UTC m=+0.925295317,LastTimestamp:2025-05-15 15:44:16.764686844 +0000 UTC m=+0.925295317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-8a7930f089,}" May 15 15:44:16.773789 kubelet[2412]: I0515 15:44:16.771361 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 15:44:16.773789 kubelet[2412]: I0515 15:44:16.773175 2412 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 15:44:16.775862 kubelet[2412]: I0515 15:44:16.775825 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 15:44:16.776423 kubelet[2412]: I0515 15:44:16.776322 2412 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 15:44:16.779402 kubelet[2412]: I0515 15:44:16.779360 2412 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 15:44:16.779550 kubelet[2412]: I0515 15:44:16.779475 2412 reconciler.go:26] "Reconciler: start to sync state" May 15 15:44:16.783885 kubelet[2412]: W0515 15:44:16.783798 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.106.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.783885 kubelet[2412]: E0515 15:44:16.783889 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.106.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.789341 kubelet[2412]: E0515 15:44:16.789265 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.106.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-8a7930f089?timeout=10s\": dial tcp 164.92.106.96:6443: connect: connection refused" interval="200ms" May 15 15:44:16.797388 kubelet[2412]: I0515 15:44:16.795963 2412 factory.go:221] Registration of the systemd container factory successfully May 15 15:44:16.797388 kubelet[2412]: I0515 15:44:16.796156 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
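[annotation] The "Failed to ensure lease exists, will retry" interval doubles on each failed attempt: 200ms here, then 400ms, 800ms, and 1.6s further down in the log. A sketch of that doubling schedule; the observed intervals come from controller.go:145 above, while the 7s ceiling is an assumption the log never reaches:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Retry intervals observed at controller.go:145: 200ms, then 400ms, 800ms, 1.6s.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed ceiling, never reached in this log
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: next lease retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```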
May 15 15:44:16.805744 kubelet[2412]: I0515 15:44:16.805082 2412 factory.go:221] Registration of the containerd container factory successfully May 15 15:44:16.820554 kubelet[2412]: E0515 15:44:16.820373 2412 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 15:44:16.831060 kubelet[2412]: I0515 15:44:16.830927 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 15:44:16.839357 kubelet[2412]: I0515 15:44:16.839306 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 15:44:16.841906 kubelet[2412]: I0515 15:44:16.841868 2412 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 15:44:16.844692 kubelet[2412]: I0515 15:44:16.844651 2412 kubelet.go:2337] "Starting kubelet main sync loop" May 15 15:44:16.845672 kubelet[2412]: E0515 15:44:16.845606 2412 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 15:44:16.846262 kubelet[2412]: W0515 15:44:16.845873 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.106.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.846262 kubelet[2412]: E0515 15:44:16.845946 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.106.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:16.854997 kubelet[2412]: I0515 15:44:16.854910 2412 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 15:44:16.854997 kubelet[2412]: I0515 15:44:16.854973 2412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 15:44:16.855189 kubelet[2412]: I0515 15:44:16.855004 2412 state_mem.go:36] "Initialized new in-memory state store" May 15 15:44:16.858414 kubelet[2412]: I0515 15:44:16.858370 2412 policy_none.go:49] "None policy: Start" May 15 15:44:16.860342 kubelet[2412]: I0515 15:44:16.860203 2412 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 15:44:16.860342 kubelet[2412]: I0515 15:44:16.860343 2412 state_mem.go:35] "Initializing new in-memory state store" May 15 15:44:16.874028 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 15:44:16.887486 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 15:44:16.891691 kubelet[2412]: I0515 15:44:16.891035 2412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:16.891691 kubelet[2412]: E0515 15:44:16.891622 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.106.96:6443/api/v1/nodes\": dial tcp 164.92.106.96:6443: connect: connection refused" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:16.896109 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
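[annotation] The three kubepods slices systemd just created are the cgroup roots for the pod QoS classes; per-pod slices nest beneath them with the pod UID's dashes escaped to underscores, exactly as the slice names in this log show (UID 9702e54170f818d4092f8f42c44125bb below, UID 4aa247b9-5887-4a6b-9104-841edcd54339 near the end). A sketch of that naming rule as this log exhibits it:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the slice names seen in this log: a QoS parent
// (kubepods, kubepods-burstable, kubepods-besteffort) plus "pod<UID>"
// with '-' escaped to '_' for systemd.
func podSlice(qos, uid string) string {
	parent := "kubepods"
	if qos != "guaranteed" {
		parent += "-" + qos
	}
	return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "9702e54170f818d4092f8f42c44125bb"))
	fmt.Println(podSlice("besteffort", "4aa247b9-5887-4a6b-9104-841edcd54339"))
}
```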
May 15 15:44:16.912689 kubelet[2412]: I0515 15:44:16.911915 2412 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 15:44:16.912689 kubelet[2412]: I0515 15:44:16.912405 2412 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 15:44:16.912689 kubelet[2412]: I0515 15:44:16.912625 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 15:44:16.916597 kubelet[2412]: E0515 15:44:16.916521 2412 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-8a7930f089\" not found" May 15 15:44:16.946668 kubelet[2412]: I0515 15:44:16.946427 2412 topology_manager.go:215] "Topology Admit Handler" podUID="9702e54170f818d4092f8f42c44125bb" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:16.951350 kubelet[2412]: I0515 15:44:16.951265 2412 topology_manager.go:215] "Topology Admit Handler" podUID="d60603f7c0509443f01e929d0e8cb1b7" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:44:16.953821 kubelet[2412]: I0515 15:44:16.953103 2412 topology_manager.go:215] "Topology Admit Handler" podUID="9f8a3e4920d38fce69d8a823591c79ce" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:16.964012 systemd[1]: Created slice kubepods-burstable-pod9702e54170f818d4092f8f42c44125bb.slice - libcontainer container kubepods-burstable-pod9702e54170f818d4092f8f42c44125bb.slice. May 15 15:44:16.981257 kubelet[2412]: I0515 15:44:16.981177 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:16.981717 kubelet[2412]: I0515 15:44:16.981541 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:16.987985 systemd[1]: Created slice kubepods-burstable-podd60603f7c0509443f01e929d0e8cb1b7.slice - libcontainer container kubepods-burstable-podd60603f7c0509443f01e929d0e8cb1b7.slice. May 15 15:44:16.990547 kubelet[2412]: E0515 15:44:16.990061 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.106.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-8a7930f089?timeout=10s\": dial tcp 164.92.106.96:6443: connect: connection refused" interval="400ms" May 15 15:44:17.005139 systemd[1]: Created slice kubepods-burstable-pod9f8a3e4920d38fce69d8a823591c79ce.slice - libcontainer container kubepods-burstable-pod9f8a3e4920d38fce69d8a823591c79ce.slice. 
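[annotation] All three control-plane pods admitted here come from the static pod path registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"): the kubelet watches that directory and runs whatever manifests appear, no API server required. A minimal scan of that directory under the same assumption (kubeadm typically drops kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, and etcd.yaml there; the file names are conventions, not read from this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/manifests" // static pod path from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read static pod path:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() && strings.HasSuffix(e.Name(), ".yaml") {
			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
		}
	}
}
```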
May 15 15:44:17.082571 kubelet[2412]: I0515 15:44:17.082467 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082571 kubelet[2412]: I0515 15:44:17.082530 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082571 kubelet[2412]: I0515 15:44:17.082564 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082863 kubelet[2412]: I0515 15:44:17.082671 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082863 kubelet[2412]: I0515 15:44:17.082748 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082863 kubelet[2412]: I0515 15:44:17.082776 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d60603f7c0509443f01e929d0e8cb1b7-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-8a7930f089\" (UID: \"d60603f7c0509443f01e929d0e8cb1b7\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.082863 kubelet[2412]: I0515 15:44:17.082803 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:17.093895 kubelet[2412]: I0515 15:44:17.093811 2412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:17.094724 kubelet[2412]: E0515 15:44:17.094649 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.106.96:6443/api/v1/nodes\": dial tcp 164.92.106.96:6443: connect: connection refused" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:17.284878 kubelet[2412]: E0515 15:44:17.284665 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:17.286290 containerd[1530]: time="2025-05-15T15:44:17.286194079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-8a7930f089,Uid:9702e54170f818d4092f8f42c44125bb,Namespace:kube-system,Attempt:0,}" May 15 15:44:17.294671 kubelet[2412]: E0515 15:44:17.294560 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:17.296057 containerd[1530]: time="2025-05-15T15:44:17.295974015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-8a7930f089,Uid:d60603f7c0509443f01e929d0e8cb1b7,Namespace:kube-system,Attempt:0,}" May 15 15:44:17.304198 systemd-resolved[1396]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 15 15:44:17.316144 kubelet[2412]: E0515 15:44:17.310286 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:17.317209 containerd[1530]: time="2025-05-15T15:44:17.311221766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-8a7930f089,Uid:9f8a3e4920d38fce69d8a823591c79ce,Namespace:kube-system,Attempt:0,}" May 15 15:44:17.392315 kubelet[2412]: E0515 15:44:17.392245 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.106.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-8a7930f089?timeout=10s\": dial tcp 164.92.106.96:6443: connect: connection refused" interval="800ms" May 15 15:44:17.497879 kubelet[2412]: I0515 15:44:17.497823 2412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:17.498456 kubelet[2412]: E0515 15:44:17.498407 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.106.96:6443/api/v1/nodes\": dial tcp 164.92.106.96:6443: connect: connection refused" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:17.691508 kubelet[2412]: W0515 15:44:17.691254 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.106.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:17.691840 kubelet[2412]: E0515 15:44:17.691813 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.106.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:17.815677 kubelet[2412]: W0515 15:44:17.815549 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.106.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:17.815677 kubelet[2412]: E0515 15:44:17.815631 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.106.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
164.92.106.96:6443: connect: connection refused May 15 15:44:17.869976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037104627.mount: Deactivated successfully. May 15 15:44:17.880186 containerd[1530]: time="2025-05-15T15:44:17.880088846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:44:17.884678 containerd[1530]: time="2025-05-15T15:44:17.884197014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 15:44:17.885563 containerd[1530]: time="2025-05-15T15:44:17.885481003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:44:17.887010 containerd[1530]: time="2025-05-15T15:44:17.886863476Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:44:17.889907 containerd[1530]: time="2025-05-15T15:44:17.889788820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:44:17.894571 containerd[1530]: time="2025-05-15T15:44:17.894496592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 15:44:17.896885 containerd[1530]: time="2025-05-15T15:44:17.896828957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 15 15:44:17.898434 containerd[1530]: time="2025-05-15T15:44:17.898258622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 15:44:17.900240 containerd[1530]: time="2025-05-15T15:44:17.899598926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.366292ms" May 15 15:44:17.900756 containerd[1530]: time="2025-05-15T15:44:17.900686414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.43272ms" May 15 15:44:17.904320 containerd[1530]: time="2025-05-15T15:44:17.903687545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.905436ms" May 15 15:44:18.152528 containerd[1530]: time="2025-05-15T15:44:18.152341182Z" level=info msg="connecting to shim 
87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98" address="unix:///run/containerd/s/6a1a9d2adc3c8b7127b8a641b792f29aee0f940d16b303dcaf2c326991767852" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:18.154463 containerd[1530]: time="2025-05-15T15:44:18.154402234Z" level=info msg="connecting to shim 2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38" address="unix:///run/containerd/s/11eb200137911fffac4da57e5409535105c1756d0152d1c14673ef4ae02c7466" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:18.170333 containerd[1530]: time="2025-05-15T15:44:18.170066437Z" level=info msg="connecting to shim 1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434" address="unix:///run/containerd/s/a891c75ff4a7507555055352c361f1477976f351051dd6847142723cb08cba8c" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:18.171259 kubelet[2412]: W0515 15:44:18.171174 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.106.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:18.171259 kubelet[2412]: E0515 15:44:18.171272 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.106.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:18.194090 kubelet[2412]: E0515 15:44:18.193008 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.106.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-8a7930f089?timeout=10s\": dial tcp 164.92.106.96:6443: connect: connection refused" interval="1.6s" May 15 15:44:18.303533 kubelet[2412]: I0515 15:44:18.303474 2412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:18.304866 kubelet[2412]: W0515 15:44:18.304796 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.106.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089&limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:18.305075 kubelet[2412]: E0515 15:44:18.304879 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.106.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089&limit=500&resourceVersion=0": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:18.305124 kubelet[2412]: E0515 15:44:18.305086 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.106.96:6443/api/v1/nodes\": dial tcp 164.92.106.96:6443: connect: connection refused" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:18.324199 systemd[1]: Started cri-containerd-2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38.scope - libcontainer container 2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38. May 15 15:44:18.349959 systemd[1]: Started cri-containerd-1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434.scope - libcontainer container 1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434. 
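[annotation] The failing Node list above is scoped with `fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089`, i.e. the kubelet only lists and watches its own Node object rather than the whole collection. Decoding the logged URL makes that visible; the URL is copied verbatim from the reflector error above:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	raw := "https://164.92.106.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-8a7930f089&limit=500&resourceVersion=0"
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	q := u.Query() // Query() percent-decodes the parameter values
	fmt.Println("fieldSelector:", q.Get("fieldSelector")) // metadata.name=ci-4334.0.0-a-8a7930f089
	fmt.Println("limit:", q.Get("limit"))
}
```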
May 15 15:44:18.353878 systemd[1]: Started cri-containerd-87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98.scope - libcontainer container 87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98. May 15 15:44:18.524674 containerd[1530]: time="2025-05-15T15:44:18.524615967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-8a7930f089,Uid:9702e54170f818d4092f8f42c44125bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38\"" May 15 15:44:18.527745 containerd[1530]: time="2025-05-15T15:44:18.527353843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-8a7930f089,Uid:9f8a3e4920d38fce69d8a823591c79ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98\"" May 15 15:44:18.529016 kubelet[2412]: E0515 15:44:18.528840 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:18.529444 kubelet[2412]: E0515 15:44:18.528972 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:18.537732 containerd[1530]: time="2025-05-15T15:44:18.537093713Z" level=info msg="CreateContainer within sandbox \"87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 15:44:18.539158 containerd[1530]: time="2025-05-15T15:44:18.539098383Z" level=info msg="CreateContainer within sandbox \"2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 15:44:18.552058 containerd[1530]: time="2025-05-15T15:44:18.551996563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-8a7930f089,Uid:d60603f7c0509443f01e929d0e8cb1b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434\"" May 15 15:44:18.553296 kubelet[2412]: E0515 15:44:18.553258 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:18.559569 containerd[1530]: time="2025-05-15T15:44:18.559410398Z" level=info msg="CreateContainer within sandbox \"1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 15:44:18.561914 containerd[1530]: time="2025-05-15T15:44:18.561638349Z" level=info msg="Container 4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:18.565632 containerd[1530]: time="2025-05-15T15:44:18.565441120Z" level=info msg="Container c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:18.593465 containerd[1530]: time="2025-05-15T15:44:18.593275634Z" level=info msg="CreateContainer within sandbox \"87f1b75f75a5eb167fc711967af97be69eb97ff8e14469e4672c1bf7f5e54d98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43\"" 
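[annotation] The recurring dns.go:153 error is a resolv.conf constraint: the resolver only honours three nameservers, and the droplet's list even repeats 67.207.67.2, so the kubelet warns and applies only the first three entries. A sketch of that clamp, assuming the classic MAXNS limit of 3 that matches the applied line in the log; the fourth server here is a hypothetical extra added to trigger the warning:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit (MAXNS)

func main() {
	// First three entries match the applied line in dns.go:153 above;
	// 67.207.67.4 is hypothetical, included only to exceed the limit.
	servers := strings.Fields("67.207.67.2 67.207.67.3 67.207.67.2 67.207.67.4")
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: %d > %d, omitting the rest\n",
			len(servers), maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```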
May 15 15:44:18.594936 containerd[1530]: time="2025-05-15T15:44:18.594854214Z" level=info msg="StartContainer for \"4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43\"" May 15 15:44:18.596541 containerd[1530]: time="2025-05-15T15:44:18.596459223Z" level=info msg="connecting to shim 4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43" address="unix:///run/containerd/s/6a1a9d2adc3c8b7127b8a641b792f29aee0f940d16b303dcaf2c326991767852" protocol=ttrpc version=3 May 15 15:44:18.600527 containerd[1530]: time="2025-05-15T15:44:18.600031270Z" level=info msg="Container b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:18.604751 containerd[1530]: time="2025-05-15T15:44:18.604659129Z" level=info msg="CreateContainer within sandbox \"2c7a6bd822f9f8a1ee885c4cd2f50b8a85f16266aafecb4907dcaeb5794a5f38\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8\"" May 15 15:44:18.605605 containerd[1530]: time="2025-05-15T15:44:18.605570487Z" level=info msg="StartContainer for \"c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8\"" May 15 15:44:18.608114 containerd[1530]: time="2025-05-15T15:44:18.608054396Z" level=info msg="connecting to shim c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8" address="unix:///run/containerd/s/11eb200137911fffac4da57e5409535105c1756d0152d1c14673ef4ae02c7466" protocol=ttrpc version=3 May 15 15:44:18.612755 containerd[1530]: time="2025-05-15T15:44:18.612655005Z" level=info msg="CreateContainer within sandbox \"1fee8bfc38de6c5fa4d1ed5ad279a71daec02b8a51a4ba76c84ccd1816c3b434\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162\"" May 15 15:44:18.614148 containerd[1530]: time="2025-05-15T15:44:18.614105107Z" level=info msg="StartContainer for \"b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162\"" May 15 15:44:18.618330 containerd[1530]: time="2025-05-15T15:44:18.618276122Z" level=info msg="connecting to shim b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162" address="unix:///run/containerd/s/a891c75ff4a7507555055352c361f1477976f351051dd6847142723cb08cba8c" protocol=ttrpc version=3 May 15 15:44:18.645001 systemd[1]: Started cri-containerd-c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8.scope - libcontainer container c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8. May 15 15:44:18.657067 systemd[1]: Started cri-containerd-4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43.scope - libcontainer container 4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43. May 15 15:44:18.683131 systemd[1]: Started cri-containerd-b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162.scope - libcontainer container b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162. 
May 15 15:44:18.748729 kubelet[2412]: E0515 15:44:18.748623 2412 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.106.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.106.96:6443: connect: connection refused May 15 15:44:18.794835 containerd[1530]: time="2025-05-15T15:44:18.793975601Z" level=info msg="StartContainer for \"c3490f72fece47b67d9d97e25709df3f251ab5c76e3a479a40dd7214d45459a8\" returns successfully" May 15 15:44:18.812260 containerd[1530]: time="2025-05-15T15:44:18.812175818Z" level=info msg="StartContainer for \"4f9325802f72b8094ffe82e81d9aac97b49bac13a72ad152ae6bae0eac226b43\" returns successfully" May 15 15:44:18.911974 kubelet[2412]: E0515 15:44:18.911637 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:18.913222 containerd[1530]: time="2025-05-15T15:44:18.913138633Z" level=info msg="StartContainer for \"b0c340c02205ef36d230594cec723ee10524621ad8f07b397a53baa85718b162\" returns successfully" May 15 15:44:18.923021 kubelet[2412]: E0515 15:44:18.922931 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:19.909619 kubelet[2412]: I0515 15:44:19.909333 2412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:19.923348 kubelet[2412]: E0515 15:44:19.923251 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:19.925752 kubelet[2412]: E0515 15:44:19.924107 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:20.928057 kubelet[2412]: E0515 15:44:20.927974 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:21.310212 kubelet[2412]: E0515 15:44:21.310032 2412 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-8a7930f089\" not found" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:21.368819 kubelet[2412]: I0515 15:44:21.368630 2412 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:21.761744 kubelet[2412]: I0515 15:44:21.761637 2412 apiserver.go:52] "Watching apiserver" May 15 15:44:21.780545 kubelet[2412]: I0515 15:44:21.780466 2412 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 15:44:22.536613 kubelet[2412]: W0515 15:44:22.536529 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:44:22.538727 kubelet[2412]: E0515 15:44:22.538546 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 
15:44:22.931675 kubelet[2412]: E0515 15:44:22.931580 2412 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:23.824989 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-7.scope)... May 15 15:44:23.825031 systemd[1]: Reloading... May 15 15:44:23.963850 zram_generator::config[2724]: No configuration found. May 15 15:44:24.150417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 15:44:24.329863 systemd[1]: Reloading finished in 504 ms. May 15 15:44:24.365369 kubelet[2412]: I0515 15:44:24.365254 2412 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 15:44:24.365531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:24.385692 systemd[1]: kubelet.service: Deactivated successfully. May 15 15:44:24.386048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:24.386132 systemd[1]: kubelet.service: Consumed 1.531s CPU time, 110.1M memory peak. May 15 15:44:24.389215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 15:44:24.609071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 15:44:24.624172 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 15:44:24.762269 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:44:24.762269 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 15:44:24.762269 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 15:44:24.763582 kubelet[2778]: I0515 15:44:24.762340 2778 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 15:44:24.778650 kubelet[2778]: I0515 15:44:24.778593 2778 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 15:44:24.778650 kubelet[2778]: I0515 15:44:24.778636 2778 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 15:44:24.780032 kubelet[2778]: I0515 15:44:24.779985 2778 server.go:927] "Client rotation is on, will bootstrap in background" May 15 15:44:24.782597 kubelet[2778]: I0515 15:44:24.782527 2778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 15:44:24.785231 kubelet[2778]: I0515 15:44:24.784313 2778 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 15:44:24.804976 kubelet[2778]: I0515 15:44:24.804917 2778 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 15:44:24.805237 kubelet[2778]: I0515 15:44:24.805189 2778 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 15:44:24.805740 kubelet[2778]: I0515 15:44:24.805219 2778 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-8a7930f089","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 15:44:24.805740 kubelet[2778]: I0515 15:44:24.805444 2778 topology_manager.go:138] "Creating topology manager with none policy" May 15 15:44:24.805740 kubelet[2778]: I0515 15:44:24.805456 2778 container_manager_linux.go:301] "Creating device plugin manager" May 15 15:44:24.805740 kubelet[2778]: I0515 15:44:24.805508 2778 state_mem.go:36] "Initialized new in-memory state store" May 15 15:44:24.805740 kubelet[2778]: I0515 15:44:24.805622 2778 kubelet.go:400] "Attempting to sync node with API server" May 15 15:44:24.806181 kubelet[2778]: I0515 15:44:24.805637 2778 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 15:44:24.806181 kubelet[2778]: I0515 15:44:24.805659 2778 kubelet.go:312] "Adding apiserver pod source" May 15 15:44:24.806181 kubelet[2778]: I0515 15:44:24.805678 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 15:44:24.810356 kubelet[2778]: I0515 15:44:24.810271 2778 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 15:44:24.812233 kubelet[2778]: I0515 15:44:24.812191 2778 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 15:44:24.815610 kubelet[2778]: I0515 15:44:24.815557 2778 server.go:1264] "Started kubelet" May 15 15:44:24.832983 kubelet[2778]: I0515 15:44:24.832753 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 15:44:24.839015 kubelet[2778]: I0515 15:44:24.838885 2778 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 15:44:24.849866 kubelet[2778]: I0515 15:44:24.848962 2778 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 15:44:24.852997 kubelet[2778]: I0515 15:44:24.852933 2778 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 15:44:24.860858 kubelet[2778]: I0515 15:44:24.859104 2778 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 15:44:24.860858 kubelet[2778]: I0515 15:44:24.859299 2778 reconciler.go:26] "Reconciler: start to sync state" May 15 15:44:24.860858 kubelet[2778]: I0515 15:44:24.859763 2778 server.go:455] "Adding debug handlers to kubelet server" May 15 15:44:24.864204 kubelet[2778]: I0515 15:44:24.862667 2778 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 15:44:24.865993 kubelet[2778]: I0515 15:44:24.865949 2778 factory.go:221] Registration of the systemd container factory successfully May 15 15:44:24.866149 kubelet[2778]: I0515 15:44:24.866088 2778 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 15:44:24.871786 kubelet[2778]: E0515 15:44:24.869938 2778 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 15:44:24.877484 kubelet[2778]: I0515 15:44:24.877444 2778 factory.go:221] Registration of the containerd container factory successfully May 15 15:44:24.888079 kubelet[2778]: I0515 15:44:24.887586 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 15:44:24.897346 kubelet[2778]: I0515 15:44:24.897150 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 15:44:24.900225 kubelet[2778]: I0515 15:44:24.900093 2778 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 15:44:24.901185 kubelet[2778]: I0515 15:44:24.901053 2778 kubelet.go:2337] "Starting kubelet main sync loop" May 15 15:44:24.913675 kubelet[2778]: E0515 15:44:24.913389 2778 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 15:44:24.959306 kubelet[2778]: I0515 15:44:24.959267 2778 kubelet_node_status.go:73] "Attempting to register node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:24.988568 kubelet[2778]: I0515 15:44:24.987521 2778 kubelet_node_status.go:112] "Node was previously registered" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:24.988568 kubelet[2778]: I0515 15:44:24.987628 2778 kubelet_node_status.go:76] "Successfully registered node" node="ci-4334.0.0-a-8a7930f089" May 15 15:44:25.014499 kubelet[2778]: E0515 15:44:25.014438 2778 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 15:44:25.037454 kubelet[2778]: I0515 15:44:25.037418 2778 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 15:44:25.037835 kubelet[2778]: I0515 15:44:25.037654 2778 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 15:44:25.037835 kubelet[2778]: I0515 15:44:25.037716 2778 state_mem.go:36] "Initialized new in-memory state store" May 15 15:44:25.038185 kubelet[2778]: I0515 15:44:25.038162 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 15:44:25.038346 kubelet[2778]: I0515 15:44:25.038269 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 15:44:25.038437 
kubelet[2778]: I0515 15:44:25.038425 2778 policy_none.go:49] "None policy: Start" May 15 15:44:25.040268 kubelet[2778]: I0515 15:44:25.039763 2778 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 15:44:25.040268 kubelet[2778]: I0515 15:44:25.039816 2778 state_mem.go:35] "Initializing new in-memory state store" May 15 15:44:25.040268 kubelet[2778]: I0515 15:44:25.040087 2778 state_mem.go:75] "Updated machine memory state" May 15 15:44:25.052855 kubelet[2778]: I0515 15:44:25.052812 2778 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 15:44:25.054328 kubelet[2778]: I0515 15:44:25.054265 2778 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 15:44:25.058960 kubelet[2778]: I0515 15:44:25.058924 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 15:44:25.216781 kubelet[2778]: I0515 15:44:25.215853 2778 topology_manager.go:215] "Topology Admit Handler" podUID="9f8a3e4920d38fce69d8a823591c79ce" podNamespace="kube-system" podName="kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.216781 kubelet[2778]: I0515 15:44:25.215987 2778 topology_manager.go:215] "Topology Admit Handler" podUID="9702e54170f818d4092f8f42c44125bb" podNamespace="kube-system" podName="kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.216781 kubelet[2778]: I0515 15:44:25.216052 2778 topology_manager.go:215] "Topology Admit Handler" podUID="d60603f7c0509443f01e929d0e8cb1b7" podNamespace="kube-system" podName="kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.232149 kubelet[2778]: W0515 15:44:25.232097 2778 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:44:25.233047 kubelet[2778]: E0515 15:44:25.232969 2778 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.236226 kubelet[2778]: W0515 15:44:25.236181 2778 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:44:25.237769 kubelet[2778]: W0515 15:44:25.237694 2778 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 15:44:25.264340 kubelet[2778]: I0515 15:44:25.263906 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264340 kubelet[2778]: I0515 15:44:25.263977 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264340 kubelet[2778]: I0515 15:44:25.264014 2778 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264340 kubelet[2778]: I0515 15:44:25.264065 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264340 kubelet[2778]: I0515 15:44:25.264099 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264878 kubelet[2778]: I0515 15:44:25.264129 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264878 kubelet[2778]: I0515 15:44:25.264158 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9702e54170f818d4092f8f42c44125bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-8a7930f089\" (UID: \"9702e54170f818d4092f8f42c44125bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264878 kubelet[2778]: I0515 15:44:25.264187 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d60603f7c0509443f01e929d0e8cb1b7-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-8a7930f089\" (UID: \"d60603f7c0509443f01e929d0e8cb1b7\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.264878 kubelet[2778]: I0515 15:44:25.264219 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f8a3e4920d38fce69d8a823591c79ce-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-8a7930f089\" (UID: \"9f8a3e4920d38fce69d8a823591c79ce\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:44:25.536405 kubelet[2778]: E0515 15:44:25.535669 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:25.539281 kubelet[2778]: E0515 15:44:25.538638 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:25.539710 kubelet[2778]: E0515 15:44:25.539478 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:25.808175 kubelet[2778]: I0515 15:44:25.807991 2778 apiserver.go:52] "Watching apiserver" May 15 15:44:25.860330 kubelet[2778]: I0515 15:44:25.860251 2778 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 15:44:26.002761 kubelet[2778]: E0515 15:44:26.002679 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:26.013396 kubelet[2778]: E0515 15:44:26.013329 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:26.013919 kubelet[2778]: E0515 15:44:26.013882 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:26.062879 kubelet[2778]: I0515 15:44:26.062224 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" podStartSLOduration=1.062188239 podStartE2EDuration="1.062188239s" podCreationTimestamp="2025-05-15 15:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:44:26.027905125 +0000 UTC m=+1.391252718" watchObservedRunningTime="2025-05-15 15:44:26.062188239 +0000 UTC m=+1.425535821" May 15 15:44:26.094449 kubelet[2778]: I0515 15:44:26.094376 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" podStartSLOduration=1.094352423 podStartE2EDuration="1.094352423s" podCreationTimestamp="2025-05-15 15:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:44:26.062557005 +0000 UTC m=+1.425904640" watchObservedRunningTime="2025-05-15 15:44:26.094352423 +0000 UTC m=+1.457700005" May 15 15:44:26.143438 kubelet[2778]: I0515 15:44:26.143348 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" podStartSLOduration=4.143288544 podStartE2EDuration="4.143288544s" podCreationTimestamp="2025-05-15 15:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:44:26.097035807 +0000 UTC m=+1.460383401" watchObservedRunningTime="2025-05-15 15:44:26.143288544 +0000 UTC m=+1.506636134" May 15 15:44:27.002300 kubelet[2778]: E0515 15:44:27.002239 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:28.006423 kubelet[2778]: E0515 15:44:28.006376 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:31.233239 sudo[1756]: pam_unix(sudo:session): session closed for user root May 15 15:44:31.238247 sshd[1755]: Connection closed by 139.178.68.195 port 37696 May 15 15:44:31.239462 
sshd-session[1753]: pam_unix(sshd:session): session closed for user core May 15 15:44:31.247424 systemd[1]: sshd@6-164.92.106.96:22-139.178.68.195:37696.service: Deactivated successfully. May 15 15:44:31.252279 systemd[1]: session-7.scope: Deactivated successfully. May 15 15:44:31.253029 systemd[1]: session-7.scope: Consumed 7.793s CPU time, 186.6M memory peak. May 15 15:44:31.255505 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit. May 15 15:44:31.259062 systemd-logind[1513]: Removed session 7. May 15 15:44:32.128478 kubelet[2778]: E0515 15:44:32.127000 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:32.314226 update_engine[1514]: I20250515 15:44:32.313856 1514 update_attempter.cc:509] Updating boot flags... May 15 15:44:33.018996 kubelet[2778]: E0515 15:44:33.018955 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:33.290562 kubelet[2778]: E0515 15:44:33.290412 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:34.022020 kubelet[2778]: E0515 15:44:34.021970 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:37.772744 kubelet[2778]: E0515 15:44:37.771524 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:37.889616 kubelet[2778]: I0515 15:44:37.889524 2778 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 15:44:37.891944 containerd[1530]: time="2025-05-15T15:44:37.891795248Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 15:44:37.894405 kubelet[2778]: I0515 15:44:37.894061 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 15:44:38.765293 kubelet[2778]: I0515 15:44:38.762909 2778 topology_manager.go:215] "Topology Admit Handler" podUID="4aa247b9-5887-4a6b-9104-841edcd54339" podNamespace="kube-system" podName="kube-proxy-mmxxf" May 15 15:44:38.781115 systemd[1]: Created slice kubepods-besteffort-pod4aa247b9_5887_4a6b_9104_841edcd54339.slice - libcontainer container kubepods-besteffort-pod4aa247b9_5887_4a6b_9104_841edcd54339.slice. May 15 15:44:38.944998 kubelet[2778]: I0515 15:44:38.944655 2778 topology_manager.go:215] "Topology Admit Handler" podUID="6b0cd25c-cf9a-4891-89b8-290ccc6590da" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-qfvrk" May 15 15:44:38.957173 systemd[1]: Created slice kubepods-besteffort-pod6b0cd25c_cf9a_4891_89b8_290ccc6590da.slice - libcontainer container kubepods-besteffort-pod6b0cd25c_cf9a_4891_89b8_290ccc6590da.slice. 
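[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" errors above are the kubelet's resolv.conf sanity check: the glibc resolver only honors the first three nameserver entries (MAXNS), so when the host's resolv.conf carries more than three lines the kubelet truncates the list it hands to pods and re-logs the warning on every pod sync; the applied line here even contains 67.207.67.2 twice. A minimal sketch of the same check, assuming the standard /etc/resolv.conf location (an illustration of the mechanism, not the kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS: resolvers past the first
// three "nameserver" lines are silently ignored by the libc resolver.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		// The kubelet logs essentially this condition and applies only
		// the first three entries (the dns.go "applied nameserver line").
		fmt.Printf("limit exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```

The warning is harmless but noisy; deduplicating the droplet's resolv.conf would silence it.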
May 15 15:44:38.958201 kubelet[2778]: I0515 15:44:38.958153 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aa247b9-5887-4a6b-9104-841edcd54339-lib-modules\") pod \"kube-proxy-mmxxf\" (UID: \"4aa247b9-5887-4a6b-9104-841edcd54339\") " pod="kube-system/kube-proxy-mmxxf" May 15 15:44:38.958369 kubelet[2778]: I0515 15:44:38.958349 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aa247b9-5887-4a6b-9104-841edcd54339-xtables-lock\") pod \"kube-proxy-mmxxf\" (UID: \"4aa247b9-5887-4a6b-9104-841edcd54339\") " pod="kube-system/kube-proxy-mmxxf" May 15 15:44:38.958432 kubelet[2778]: I0515 15:44:38.958421 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4aa247b9-5887-4a6b-9104-841edcd54339-kube-proxy\") pod \"kube-proxy-mmxxf\" (UID: \"4aa247b9-5887-4a6b-9104-841edcd54339\") " pod="kube-system/kube-proxy-mmxxf" May 15 15:44:38.958509 kubelet[2778]: I0515 15:44:38.958494 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwlch\" (UniqueName: \"kubernetes.io/projected/4aa247b9-5887-4a6b-9104-841edcd54339-kube-api-access-cwlch\") pod \"kube-proxy-mmxxf\" (UID: \"4aa247b9-5887-4a6b-9104-841edcd54339\") " pod="kube-system/kube-proxy-mmxxf" May 15 15:44:39.059808 kubelet[2778]: I0515 15:44:39.059552 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b0cd25c-cf9a-4891-89b8-290ccc6590da-var-lib-calico\") pod \"tigera-operator-797db67f8-qfvrk\" (UID: \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\") " pod="tigera-operator/tigera-operator-797db67f8-qfvrk" May 15 15:44:39.059808 kubelet[2778]: I0515 15:44:39.059631 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxf5\" (UniqueName: \"kubernetes.io/projected/6b0cd25c-cf9a-4891-89b8-290ccc6590da-kube-api-access-bzxf5\") pod \"tigera-operator-797db67f8-qfvrk\" (UID: \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\") " pod="tigera-operator/tigera-operator-797db67f8-qfvrk" May 15 15:44:39.093910 kubelet[2778]: E0515 15:44:39.093594 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:39.095288 containerd[1530]: time="2025-05-15T15:44:39.095237380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmxxf,Uid:4aa247b9-5887-4a6b-9104-841edcd54339,Namespace:kube-system,Attempt:0,}" May 15 15:44:39.129733 containerd[1530]: time="2025-05-15T15:44:39.129530940Z" level=info msg="connecting to shim 5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded" address="unix:///run/containerd/s/f73a97ac6ef609ea30407bc6c40d10614e6e778542efab099bc80fe5f1feb488" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:39.173068 systemd[1]: Started cri-containerd-5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded.scope - libcontainer container 5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded. 
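[Annotation] The reconciler_common.go:247 lines above enumerate every volume in the kube-proxy pod spec as the kubelet verifies attachment: two hostPath mounts (lib-modules, xtables-lock), the kube-proxy ConfigMap, and a projected service-account token (kube-api-access-cwlch). A hedged sketch of the equivalent volume list using the k8s.io/api core/v1 types; the volume names come from the log, while the host paths are the usual kubeadm defaults and are assumptions here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeProxyVolumes rebuilds the volumes the reconciler is verifying.
// The projected kube-api-access-* token volume is injected by the API
// server automatically, so it is omitted from this sketch.
func kubeProxyVolumes() []corev1.Volume {
	hostPath := func(name, path string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path},
			},
		}
	}
	return []corev1.Volume{
		hostPath("lib-modules", "/lib/modules"),   // assumed default path
		hostPath("xtables-lock", "/run/xtables.lock"), // assumed default path
		{
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
	}
}

func main() {
	for _, v := range kubeProxyVolumes() {
		fmt.Println(v.Name)
	}
}
```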
May 15 15:44:39.225447 containerd[1530]: time="2025-05-15T15:44:39.225372897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mmxxf,Uid:4aa247b9-5887-4a6b-9104-841edcd54339,Namespace:kube-system,Attempt:0,} returns sandbox id \"5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded\"" May 15 15:44:39.227340 kubelet[2778]: E0515 15:44:39.227270 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:39.236637 containerd[1530]: time="2025-05-15T15:44:39.236122134Z" level=info msg="CreateContainer within sandbox \"5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 15:44:39.259074 containerd[1530]: time="2025-05-15T15:44:39.258497371Z" level=info msg="Container 7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:39.266935 containerd[1530]: time="2025-05-15T15:44:39.266876837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-qfvrk,Uid:6b0cd25c-cf9a-4891-89b8-290ccc6590da,Namespace:tigera-operator,Attempt:0,}" May 15 15:44:39.272773 containerd[1530]: time="2025-05-15T15:44:39.272587804Z" level=info msg="CreateContainer within sandbox \"5456e09f0800970b2da6fe8c88d28624cf19828757d96d5ddafb9b94e444cded\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73\"" May 15 15:44:39.274977 containerd[1530]: time="2025-05-15T15:44:39.274299212Z" level=info msg="StartContainer for \"7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73\"" May 15 15:44:39.277513 containerd[1530]: time="2025-05-15T15:44:39.277466053Z" level=info msg="connecting to shim 7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73" address="unix:///run/containerd/s/f73a97ac6ef609ea30407bc6c40d10614e6e778542efab099bc80fe5f1feb488" protocol=ttrpc version=3 May 15 15:44:39.297859 containerd[1530]: time="2025-05-15T15:44:39.297788474Z" level=info msg="connecting to shim 6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d" address="unix:///run/containerd/s/5f284d8739288baa82b78cc5989dcdc838467a5b0abf65254b1f52182c15a86d" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:39.314123 systemd[1]: Started cri-containerd-7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73.scope - libcontainer container 7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73. May 15 15:44:39.343477 systemd[1]: Started cri-containerd-6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d.scope - libcontainer container 6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d. 
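[Annotation] Each "connecting to shim ... protocol=ttrpc version=3" line above is containerd dialing the per-sandbox shim over its unix ttrpc socket before creating or starting a container in that sandbox. In the log this is driven by the CRI plugin on the kubelet's behalf; roughly the same create-and-start flow can be reproduced through the public containerd Go client (the image reference and IDs below are placeholders, and this sketch bypasses the CRI layer entirely):

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask is where containerd launches and dials the shim over ttrpc,
	// matching the "connecting to shim" lines; Start launches the process,
	// matching "StartContainer ... returns successfully".
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```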
May 15 15:44:39.402664 containerd[1530]: time="2025-05-15T15:44:39.402563810Z" level=info msg="StartContainer for \"7886abb6653beee72e943c501d63e7c9ff41bb3f4c49a0e8c6a503f45d8c3a73\" returns successfully" May 15 15:44:39.457402 containerd[1530]: time="2025-05-15T15:44:39.457320962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-qfvrk,Uid:6b0cd25c-cf9a-4891-89b8-290ccc6590da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\"" May 15 15:44:39.465069 containerd[1530]: time="2025-05-15T15:44:39.465017958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 15:44:40.040359 kubelet[2778]: E0515 15:44:40.040310 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:40.094077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335815300.mount: Deactivated successfully. May 15 15:44:41.457343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453987810.mount: Deactivated successfully. May 15 15:44:42.313817 containerd[1530]: time="2025-05-15T15:44:42.313730127Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:42.315254 containerd[1530]: time="2025-05-15T15:44:42.314961033Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 15:44:42.316224 containerd[1530]: time="2025-05-15T15:44:42.316171898Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:42.318682 containerd[1530]: time="2025-05-15T15:44:42.318633779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:42.320063 containerd[1530]: time="2025-05-15T15:44:42.319914622Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.854846626s" May 15 15:44:42.320063 containerd[1530]: time="2025-05-15T15:44:42.319958641Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:44:42.326799 containerd[1530]: time="2025-05-15T15:44:42.325810794Z" level=info msg="CreateContainer within sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 15:44:42.338424 containerd[1530]: time="2025-05-15T15:44:42.338369624Z" level=info msg="Container 6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:42.353444 containerd[1530]: time="2025-05-15T15:44:42.353369016Z" level=info msg="CreateContainer within sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\"" May 15 15:44:42.354909 containerd[1530]: time="2025-05-15T15:44:42.354855488Z" level=info msg="StartContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\"" May 15 15:44:42.356892 containerd[1530]: time="2025-05-15T15:44:42.356391915Z" level=info msg="connecting to shim 6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b" address="unix:///run/containerd/s/5f284d8739288baa82b78cc5989dcdc838467a5b0abf65254b1f52182c15a86d" protocol=ttrpc version=3 May 15 15:44:42.392029 systemd[1]: Started cri-containerd-6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b.scope - libcontainer container 6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b. May 15 15:44:42.435560 containerd[1530]: time="2025-05-15T15:44:42.435440660Z" level=info msg="StartContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" returns successfully" May 15 15:44:43.065429 kubelet[2778]: I0515 15:44:43.065012 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mmxxf" podStartSLOduration=5.064987268 podStartE2EDuration="5.064987268s" podCreationTimestamp="2025-05-15 15:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 15:44:40.055580702 +0000 UTC m=+15.418928284" watchObservedRunningTime="2025-05-15 15:44:43.064987268 +0000 UTC m=+18.428334844" May 15 15:44:44.948414 kubelet[2778]: I0515 15:44:44.948146 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-qfvrk" podStartSLOduration=4.0891602 podStartE2EDuration="6.948127041s" podCreationTimestamp="2025-05-15 15:44:38 +0000 UTC" firstStartedPulling="2025-05-15 15:44:39.462394419 +0000 UTC m=+14.825741980" lastFinishedPulling="2025-05-15 15:44:42.321361261 +0000 UTC m=+17.684708821" observedRunningTime="2025-05-15 15:44:43.067152793 +0000 UTC m=+18.430500548" watchObservedRunningTime="2025-05-15 15:44:44.948127041 +0000 UTC m=+20.311474624" May 15 15:44:45.693818 kubelet[2778]: I0515 15:44:45.692694 2778 topology_manager.go:215] "Topology Admit Handler" podUID="4166b2be-b827-4afd-a035-d6bf847d3ec9" podNamespace="calico-system" podName="calico-typha-c75d45c47-9qmhx" May 15 15:44:45.726464 systemd[1]: Created slice kubepods-besteffort-pod4166b2be_b827_4afd_a035_d6bf847d3ec9.slice - libcontainer container kubepods-besteffort-pod4166b2be_b827_4afd_a035_d6bf847d3ec9.slice. 
May 15 15:44:45.803334 kubelet[2778]: I0515 15:44:45.803263 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4166b2be-b827-4afd-a035-d6bf847d3ec9-tigera-ca-bundle\") pod \"calico-typha-c75d45c47-9qmhx\" (UID: \"4166b2be-b827-4afd-a035-d6bf847d3ec9\") " pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:44:45.803334 kubelet[2778]: I0515 15:44:45.803321 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4166b2be-b827-4afd-a035-d6bf847d3ec9-typha-certs\") pod \"calico-typha-c75d45c47-9qmhx\" (UID: \"4166b2be-b827-4afd-a035-d6bf847d3ec9\") " pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:44:45.803334 kubelet[2778]: I0515 15:44:45.803342 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vj8c\" (UniqueName: \"kubernetes.io/projected/4166b2be-b827-4afd-a035-d6bf847d3ec9-kube-api-access-6vj8c\") pod \"calico-typha-c75d45c47-9qmhx\" (UID: \"4166b2be-b827-4afd-a035-d6bf847d3ec9\") " pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:44:45.884832 kubelet[2778]: I0515 15:44:45.884762 2778 topology_manager.go:215] "Topology Admit Handler" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" podNamespace="calico-system" podName="calico-node-nfvst" May 15 15:44:45.898838 systemd[1]: Created slice kubepods-besteffort-pod85ff5786_c114_43e4_8f58_d6ff4433361a.slice - libcontainer container kubepods-besteffort-pod85ff5786_c114_43e4_8f58_d6ff4433361a.slice. May 15 15:44:46.004854 kubelet[2778]: I0515 15:44:46.004661 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-lib-modules\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.004854 kubelet[2778]: I0515 15:44:46.004728 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-policysync\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.004854 kubelet[2778]: I0515 15:44:46.004750 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-var-lib-calico\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.004854 kubelet[2778]: I0515 15:44:46.004771 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-var-run-calico\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.004854 kubelet[2778]: I0515 15:44:46.004787 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-cni-log-dir\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " 
pod="calico-system/calico-node-nfvst" May 15 15:44:46.005448 kubelet[2778]: I0515 15:44:46.004804 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ff5786-c114-43e4-8f58-d6ff4433361a-tigera-ca-bundle\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005448 kubelet[2778]: I0515 15:44:46.004822 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85ff5786-c114-43e4-8f58-d6ff4433361a-node-certs\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005448 kubelet[2778]: I0515 15:44:46.004840 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-xtables-lock\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005448 kubelet[2778]: I0515 15:44:46.004877 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-cni-bin-dir\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005448 kubelet[2778]: I0515 15:44:46.004899 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-flexvol-driver-host\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005574 kubelet[2778]: I0515 15:44:46.004930 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrpmj\" (UniqueName: \"kubernetes.io/projected/85ff5786-c114-43e4-8f58-d6ff4433361a-kube-api-access-nrpmj\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.005574 kubelet[2778]: I0515 15:44:46.004946 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85ff5786-c114-43e4-8f58-d6ff4433361a-cni-net-dir\") pod \"calico-node-nfvst\" (UID: \"85ff5786-c114-43e4-8f58-d6ff4433361a\") " pod="calico-system/calico-node-nfvst" May 15 15:44:46.034495 kubelet[2778]: E0515 15:44:46.034440 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:46.037015 containerd[1530]: time="2025-05-15T15:44:46.036950875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c75d45c47-9qmhx,Uid:4166b2be-b827-4afd-a035-d6bf847d3ec9,Namespace:calico-system,Attempt:0,}" May 15 15:44:46.084004 containerd[1530]: time="2025-05-15T15:44:46.081694667Z" level=info msg="connecting to shim 28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd" address="unix:///run/containerd/s/9e059022c53f00bbc3b6219a90968da7323f687f93e372092af2de27b9024a4f" namespace=k8s.io protocol=ttrpc 
version=3 May 15 15:44:46.121776 kubelet[2778]: E0515 15:44:46.120931 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.121776 kubelet[2778]: W0515 15:44:46.120971 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.121776 kubelet[2778]: E0515 15:44:46.121026 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.122156 kubelet[2778]: E0515 15:44:46.122050 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.122156 kubelet[2778]: W0515 15:44:46.122088 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.122156 kubelet[2778]: E0515 15:44:46.122117 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.124991 kubelet[2778]: E0515 15:44:46.124932 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.124991 kubelet[2778]: W0515 15:44:46.124960 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.124991 kubelet[2778]: E0515 15:44:46.124986 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.127005 kubelet[2778]: E0515 15:44:46.126961 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.127005 kubelet[2778]: W0515 15:44:46.126989 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.127005 kubelet[2778]: E0515 15:44:46.127022 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.130997 kubelet[2778]: E0515 15:44:46.130962 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.131336 kubelet[2778]: W0515 15:44:46.131271 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.131841 kubelet[2778]: E0515 15:44:46.131312 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.144448 kubelet[2778]: E0515 15:44:46.144390 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.144448 kubelet[2778]: W0515 15:44:46.144431 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.144448 kubelet[2778]: E0515 15:44:46.144461 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.171514 systemd[1]: Started cri-containerd-28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd.scope - libcontainer container 28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd. May 15 15:44:46.212678 kubelet[2778]: E0515 15:44:46.212624 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:46.215453 containerd[1530]: time="2025-05-15T15:44:46.215267600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nfvst,Uid:85ff5786-c114-43e4-8f58-d6ff4433361a,Namespace:calico-system,Attempt:0,}" May 15 15:44:46.216942 kubelet[2778]: I0515 15:44:46.216849 2778 topology_manager.go:215] "Topology Admit Handler" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" podNamespace="calico-system" podName="csi-node-driver-h6786" May 15 15:44:46.217326 kubelet[2778]: E0515 15:44:46.217296 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:46.235908 kubelet[2778]: E0515 15:44:46.235860 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.236397 kubelet[2778]: W0515 15:44:46.236358 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.237174 kubelet[2778]: E0515 15:44:46.236852 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.238267 kubelet[2778]: E0515 15:44:46.237592 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.238267 kubelet[2778]: W0515 15:44:46.237607 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.238267 kubelet[2778]: E0515 15:44:46.237629 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.241174 kubelet[2778]: E0515 15:44:46.241043 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.243279 kubelet[2778]: W0515 15:44:46.242938 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.243279 kubelet[2778]: E0515 15:44:46.242995 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.245675 kubelet[2778]: E0515 15:44:46.245256 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.247071 kubelet[2778]: W0515 15:44:46.246505 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.247071 kubelet[2778]: E0515 15:44:46.246549 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.251347 kubelet[2778]: E0515 15:44:46.251234 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.251347 kubelet[2778]: W0515 15:44:46.251270 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.251985 kubelet[2778]: E0515 15:44:46.251581 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.252431 kubelet[2778]: E0515 15:44:46.252317 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.252431 kubelet[2778]: W0515 15:44:46.252366 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.252431 kubelet[2778]: E0515 15:44:46.252395 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.253713 kubelet[2778]: E0515 15:44:46.253232 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.253713 kubelet[2778]: W0515 15:44:46.253455 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.253713 kubelet[2778]: E0515 15:44:46.253484 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.254280 kubelet[2778]: E0515 15:44:46.254252 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.255805 kubelet[2778]: W0515 15:44:46.254832 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.255805 kubelet[2778]: E0515 15:44:46.254861 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.255805 kubelet[2778]: E0515 15:44:46.255107 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.255805 kubelet[2778]: W0515 15:44:46.255121 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.255805 kubelet[2778]: E0515 15:44:46.255134 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.257177 kubelet[2778]: E0515 15:44:46.256807 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.257177 kubelet[2778]: W0515 15:44:46.256837 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.257177 kubelet[2778]: E0515 15:44:46.256859 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.257177 kubelet[2778]: E0515 15:44:46.257038 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.257177 kubelet[2778]: W0515 15:44:46.257058 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.257177 kubelet[2778]: E0515 15:44:46.257072 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.260185 kubelet[2778]: E0515 15:44:46.260152 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.260857 kubelet[2778]: W0515 15:44:46.260488 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.261088 kubelet[2778]: E0515 15:44:46.260974 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.265738 kubelet[2778]: E0515 15:44:46.265433 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.265738 kubelet[2778]: W0515 15:44:46.265461 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.265738 kubelet[2778]: E0515 15:44:46.265496 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.265738 kubelet[2778]: E0515 15:44:46.265648 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.265738 kubelet[2778]: W0515 15:44:46.265655 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.265738 kubelet[2778]: E0515 15:44:46.265673 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.267105 kubelet[2778]: E0515 15:44:46.266975 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.267105 kubelet[2778]: W0515 15:44:46.267003 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.267105 kubelet[2778]: E0515 15:44:46.267054 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.268404 kubelet[2778]: E0515 15:44:46.268325 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.270384 kubelet[2778]: W0515 15:44:46.270188 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.270559 containerd[1530]: time="2025-05-15T15:44:46.270458467Z" level=info msg="connecting to shim 613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f" address="unix:///run/containerd/s/8280441f141ab184c4ba9783f0f24f6a722c797718422063846e1a0f3b9536a1" namespace=k8s.io protocol=ttrpc version=3 May 15 15:44:46.273648 kubelet[2778]: E0515 15:44:46.273443 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.275014 kubelet[2778]: E0515 15:44:46.274914 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.275014 kubelet[2778]: W0515 15:44:46.274950 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.275570 kubelet[2778]: E0515 15:44:46.274987 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.281003 kubelet[2778]: E0515 15:44:46.280792 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.281557 kubelet[2778]: W0515 15:44:46.280949 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.281557 kubelet[2778]: E0515 15:44:46.281343 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.282977 kubelet[2778]: E0515 15:44:46.282946 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.283497 kubelet[2778]: W0515 15:44:46.283439 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.283889 kubelet[2778]: E0515 15:44:46.283516 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.286033 kubelet[2778]: E0515 15:44:46.285960 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.286033 kubelet[2778]: W0515 15:44:46.286007 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.286185 kubelet[2778]: E0515 15:44:46.286041 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.309934 kubelet[2778]: E0515 15:44:46.309768 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.309934 kubelet[2778]: W0515 15:44:46.309810 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.309934 kubelet[2778]: E0515 15:44:46.309850 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.309934 kubelet[2778]: I0515 15:44:46.309908 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d39bfc53-e893-4a7d-a3e9-870e79b27f93-registration-dir\") pod \"csi-node-driver-h6786\" (UID: \"d39bfc53-e893-4a7d-a3e9-870e79b27f93\") " pod="calico-system/csi-node-driver-h6786" May 15 15:44:46.311296 kubelet[2778]: E0515 15:44:46.310829 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.311296 kubelet[2778]: W0515 15:44:46.310853 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.311296 kubelet[2778]: E0515 15:44:46.310885 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.311296 kubelet[2778]: I0515 15:44:46.310920 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvw74\" (UniqueName: \"kubernetes.io/projected/d39bfc53-e893-4a7d-a3e9-870e79b27f93-kube-api-access-mvw74\") pod \"csi-node-driver-h6786\" (UID: \"d39bfc53-e893-4a7d-a3e9-870e79b27f93\") " pod="calico-system/csi-node-driver-h6786" May 15 15:44:46.313478 kubelet[2778]: E0515 15:44:46.312414 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.313478 kubelet[2778]: W0515 15:44:46.312448 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.313478 kubelet[2778]: E0515 15:44:46.312491 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.314094 kubelet[2778]: E0515 15:44:46.314055 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.314337 kubelet[2778]: W0515 15:44:46.314248 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.314794 kubelet[2778]: E0515 15:44:46.314731 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.314794 kubelet[2778]: W0515 15:44:46.314745 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.315247 kubelet[2778]: E0515 15:44:46.315195 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.315470 kubelet[2778]: W0515 15:44:46.315446 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.315752 kubelet[2778]: E0515 15:44:46.315588 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.315752 kubelet[2778]: E0515 15:44:46.315355 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.315752 kubelet[2778]: I0515 15:44:46.315684 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d39bfc53-e893-4a7d-a3e9-870e79b27f93-varrun\") pod \"csi-node-driver-h6786\" (UID: \"d39bfc53-e893-4a7d-a3e9-870e79b27f93\") " pod="calico-system/csi-node-driver-h6786" May 15 15:44:46.316114 kubelet[2778]: E0515 15:44:46.315338 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.316892 kubelet[2778]: E0515 15:44:46.316779 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.316892 kubelet[2778]: W0515 15:44:46.316839 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.317268 kubelet[2778]: E0515 15:44:46.316874 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.317494 kubelet[2778]: E0515 15:44:46.317324 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.317494 kubelet[2778]: W0515 15:44:46.317341 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.317494 kubelet[2778]: E0515 15:44:46.317358 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.318890 kubelet[2778]: E0515 15:44:46.318856 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.319099 kubelet[2778]: W0515 15:44:46.318985 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.319099 kubelet[2778]: E0515 15:44:46.319017 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.319869 kubelet[2778]: E0515 15:44:46.319854 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.320054 kubelet[2778]: W0515 15:44:46.319923 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.320054 kubelet[2778]: E0515 15:44:46.319940 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.320401 kubelet[2778]: I0515 15:44:46.320287 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39bfc53-e893-4a7d-a3e9-870e79b27f93-kubelet-dir\") pod \"csi-node-driver-h6786\" (UID: \"d39bfc53-e893-4a7d-a3e9-870e79b27f93\") " pod="calico-system/csi-node-driver-h6786" May 15 15:44:46.320804 kubelet[2778]: E0515 15:44:46.320768 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.320804 kubelet[2778]: W0515 15:44:46.320782 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.321007 kubelet[2778]: E0515 15:44:46.320928 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.321434 kubelet[2778]: E0515 15:44:46.321316 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.321434 kubelet[2778]: W0515 15:44:46.321356 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.321652 kubelet[2778]: E0515 15:44:46.321596 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.322033 kubelet[2778]: E0515 15:44:46.322020 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.322223 kubelet[2778]: W0515 15:44:46.322102 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.322223 kubelet[2778]: E0515 15:44:46.322123 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.322737 kubelet[2778]: I0515 15:44:46.322429 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d39bfc53-e893-4a7d-a3e9-870e79b27f93-socket-dir\") pod \"csi-node-driver-h6786\" (UID: \"d39bfc53-e893-4a7d-a3e9-870e79b27f93\") " pod="calico-system/csi-node-driver-h6786" May 15 15:44:46.323136 kubelet[2778]: E0515 15:44:46.323044 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.323136 kubelet[2778]: W0515 15:44:46.323079 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.323136 kubelet[2778]: E0515 15:44:46.323097 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.323513 kubelet[2778]: E0515 15:44:46.323472 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.323513 kubelet[2778]: W0515 15:44:46.323485 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.323664 kubelet[2778]: E0515 15:44:46.323623 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.378670 systemd[1]: Started cri-containerd-613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f.scope - libcontainer container 613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f. 
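[Annotation] The driver-call.go:262 / driver-call.go:149 / plugins.go:730 triples flooding this stretch of the log are the kubelet's FlexVolume prober re-scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on each reconcile: the nodeagent~uds directory exists (presumably created by the flexvol-driver-host hostPath mount in the calico-node spec above), but the uds binary inside has not been installed yet, so the exec fails, stdout is empty, and unmarshalling "" as JSON fails. The errors should subside once calico-node's init container drops the driver binary in place. For reference, a FlexVolume driver only has to print a JSON status object per call; a minimal stub that would satisfy the init probe, using the documented FlexVolume protocol fields (shown purely as an illustration of the contract, not as a fix to apply here):

```go
// Minimal FlexVolume driver stub: the kubelet invokes the binary as
// "<driver> init" and expects a JSON status object on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// "attach": false tells the kubelet this driver needs no
		// controller-side attach/detach step.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Unhandled calls report "Not supported" per the FlexVolume spec.
		fmt.Println(`{"status":"Not supported"}`)
		os.Exit(1)
	}
}
```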
May 15 15:44:46.425482 kubelet[2778]: E0515 15:44:46.425416 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.425807 kubelet[2778]: W0515 15:44:46.425566 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.425807 kubelet[2778]: E0515 15:44:46.425605 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.426518 kubelet[2778]: E0515 15:44:46.426454 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.426518 kubelet[2778]: W0515 15:44:46.426493 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.427007 kubelet[2778]: E0515 15:44:46.426831 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.428003 kubelet[2778]: E0515 15:44:46.427871 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.428181 kubelet[2778]: W0515 15:44:46.428093 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.428568 kubelet[2778]: E0515 15:44:46.428342 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.429274 kubelet[2778]: E0515 15:44:46.429068 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.429533 kubelet[2778]: W0515 15:44:46.429375 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.429533 kubelet[2778]: E0515 15:44:46.429409 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.430617 kubelet[2778]: E0515 15:44:46.430598 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.430714 kubelet[2778]: W0515 15:44:46.430689 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.431394 kubelet[2778]: E0515 15:44:46.430876 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.432744 kubelet[2778]: E0515 15:44:46.432395 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.432939 kubelet[2778]: W0515 15:44:46.432852 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.433358 kubelet[2778]: E0515 15:44:46.433113 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.434788 kubelet[2778]: E0515 15:44:46.434763 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.435527 kubelet[2778]: W0515 15:44:46.435343 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.435749 kubelet[2778]: E0515 15:44:46.435620 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.436544 kubelet[2778]: E0515 15:44:46.436527 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.436976 kubelet[2778]: W0515 15:44:46.436754 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.437276 kubelet[2778]: E0515 15:44:46.437154 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.437763 kubelet[2778]: E0515 15:44:46.437747 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.437999 kubelet[2778]: W0515 15:44:46.437832 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.438274 kubelet[2778]: E0515 15:44:46.438067 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.438557 kubelet[2778]: E0515 15:44:46.438544 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.438775 kubelet[2778]: W0515 15:44:46.438605 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.439288 kubelet[2778]: E0515 15:44:46.439235 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.439617 kubelet[2778]: E0515 15:44:46.439600 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.439900 kubelet[2778]: W0515 15:44:46.439743 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.439999 kubelet[2778]: E0515 15:44:46.439985 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.440346 kubelet[2778]: E0515 15:44:46.440281 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.440346 kubelet[2778]: W0515 15:44:46.440324 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.440645 kubelet[2778]: E0515 15:44:46.440572 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.441258 kubelet[2778]: E0515 15:44:46.441205 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.441258 kubelet[2778]: W0515 15:44:46.441223 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.441529 kubelet[2778]: E0515 15:44:46.441494 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.441898 kubelet[2778]: E0515 15:44:46.441816 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.441898 kubelet[2778]: W0515 15:44:46.441827 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.442200 kubelet[2778]: E0515 15:44:46.441994 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.442538 kubelet[2778]: E0515 15:44:46.442426 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.442538 kubelet[2778]: W0515 15:44:46.442442 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.442777 kubelet[2778]: E0515 15:44:46.442754 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.443338 kubelet[2778]: E0515 15:44:46.443121 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.443338 kubelet[2778]: W0515 15:44:46.443140 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.443946 kubelet[2778]: E0515 15:44:46.443903 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.443946 kubelet[2778]: W0515 15:44:46.443922 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.444319 kubelet[2778]: E0515 15:44:46.444255 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.444319 kubelet[2778]: E0515 15:44:46.444297 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.444533 kubelet[2778]: E0515 15:44:46.444498 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.444533 kubelet[2778]: W0515 15:44:46.444509 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.444776 kubelet[2778]: E0515 15:44:46.444741 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.445023 kubelet[2778]: E0515 15:44:46.444922 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.445023 kubelet[2778]: W0515 15:44:46.444932 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.445023 kubelet[2778]: E0515 15:44:46.444944 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.445255 kubelet[2778]: E0515 15:44:46.445244 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.445365 kubelet[2778]: W0515 15:44:46.445304 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.445365 kubelet[2778]: E0515 15:44:46.445328 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.445577 kubelet[2778]: E0515 15:44:46.445555 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.445577 kubelet[2778]: W0515 15:44:46.445566 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.445799 kubelet[2778]: E0515 15:44:46.445768 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.445946 kubelet[2778]: E0515 15:44:46.445926 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.445946 kubelet[2778]: W0515 15:44:46.445935 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.446129 kubelet[2778]: E0515 15:44:46.446066 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.446455 kubelet[2778]: E0515 15:44:46.446366 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.446455 kubelet[2778]: W0515 15:44:46.446379 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.446591 kubelet[2778]: E0515 15:44:46.446575 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.446858 kubelet[2778]: E0515 15:44:46.446834 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.446858 kubelet[2778]: W0515 15:44:46.446844 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.447101 kubelet[2778]: E0515 15:44:46.446950 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.447499 kubelet[2778]: E0515 15:44:46.447486 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.447597 kubelet[2778]: W0515 15:44:46.447559 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.447597 kubelet[2778]: E0515 15:44:46.447573 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 15:44:46.503090 kubelet[2778]: E0515 15:44:46.503054 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 15:44:46.503090 kubelet[2778]: W0515 15:44:46.503117 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 15:44:46.503090 kubelet[2778]: E0515 15:44:46.503146 2778 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 15:44:46.536077 containerd[1530]: time="2025-05-15T15:44:46.535652922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nfvst,Uid:85ff5786-c114-43e4-8f58-d6ff4433361a,Namespace:calico-system,Attempt:0,} returns sandbox id \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\"" May 15 15:44:46.537198 containerd[1530]: time="2025-05-15T15:44:46.537153450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c75d45c47-9qmhx,Uid:4166b2be-b827-4afd-a035-d6bf847d3ec9,Namespace:calico-system,Attempt:0,} returns sandbox id \"28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd\"" May 15 15:44:46.537832 kubelet[2778]: E0515 15:44:46.537813 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:46.538151 kubelet[2778]: E0515 15:44:46.537784 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:46.545817 containerd[1530]: time="2025-05-15T15:44:46.545769003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 15:44:47.907276 kubelet[2778]: E0515 15:44:47.907080 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:48.433655 containerd[1530]: time="2025-05-15T15:44:48.433567537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:48.435433 containerd[1530]: time="2025-05-15T15:44:48.435213156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 15:44:48.436760 containerd[1530]: time="2025-05-15T15:44:48.436689718Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:48.440960 containerd[1530]: time="2025-05-15T15:44:48.440897999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:48.442381 containerd[1530]: time="2025-05-15T15:44:48.442234279Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.896420423s" May 15 15:44:48.442381 containerd[1530]: time="2025-05-15T15:44:48.442284441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 15:44:48.445338 containerd[1530]: time="2025-05-15T15:44:48.445113895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 15:44:48.448227 containerd[1530]: time="2025-05-15T15:44:48.448141528Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 15:44:48.464736 containerd[1530]: time="2025-05-15T15:44:48.461871851Z" level=info msg="Container baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:48.469269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814298234.mount: Deactivated successfully. May 15 15:44:48.483028 containerd[1530]: time="2025-05-15T15:44:48.482966956Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\"" May 15 15:44:48.488033 containerd[1530]: time="2025-05-15T15:44:48.487946431Z" level=info msg="StartContainer for \"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\"" May 15 15:44:48.491985 containerd[1530]: time="2025-05-15T15:44:48.491855418Z" level=info msg="connecting to shim baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca" address="unix:///run/containerd/s/8280441f141ab184c4ba9783f0f24f6a722c797718422063846e1a0f3b9536a1" protocol=ttrpc version=3 May 15 15:44:48.530214 systemd[1]: Started cri-containerd-baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca.scope - libcontainer container baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca. May 15 15:44:48.589984 containerd[1530]: time="2025-05-15T15:44:48.589593496Z" level=info msg="StartContainer for \"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\" returns successfully" May 15 15:44:48.607099 systemd[1]: cri-containerd-baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca.scope: Deactivated successfully. 
May 15 15:44:48.613497 containerd[1530]: time="2025-05-15T15:44:48.613357402Z" level=info msg="received exit event container_id:\"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\" id:\"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\" pid:3361 exited_at:{seconds:1747323888 nanos:612888787}" May 15 15:44:48.614289 containerd[1530]: time="2025-05-15T15:44:48.614115668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\" id:\"baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca\" pid:3361 exited_at:{seconds:1747323888 nanos:612888787}" May 15 15:44:48.653575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baa7ab31ad0a8f9d0ce784b609c103feddb506d837299b1e3f8927a8ddfc54ca-rootfs.mount: Deactivated successfully. May 15 15:44:49.086815 kubelet[2778]: E0515 15:44:49.086748 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:49.907835 kubelet[2778]: E0515 15:44:49.907576 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:51.291855 containerd[1530]: time="2025-05-15T15:44:51.291792986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:51.294023 containerd[1530]: time="2025-05-15T15:44:51.293842506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 15:44:51.295079 containerd[1530]: time="2025-05-15T15:44:51.295032770Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:51.298097 containerd[1530]: time="2025-05-15T15:44:51.298005788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:51.299437 containerd[1530]: time="2025-05-15T15:44:51.299202546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.854014616s" May 15 15:44:51.299437 containerd[1530]: time="2025-05-15T15:44:51.299257733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 15:44:51.301852 containerd[1530]: time="2025-05-15T15:44:51.301564119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 15:44:51.322682 containerd[1530]: time="2025-05-15T15:44:51.322617575Z" level=info msg="CreateContainer within sandbox \"28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 
15:44:51.355120 containerd[1530]: time="2025-05-15T15:44:51.354887333Z" level=info msg="Container c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:51.385680 containerd[1530]: time="2025-05-15T15:44:51.383616039Z" level=info msg="CreateContainer within sandbox \"28a9044d958b75828267cef05b6b35ba0bcfca6a064c8444d153a321f0bb15cd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f\"" May 15 15:44:51.386471 containerd[1530]: time="2025-05-15T15:44:51.386054555Z" level=info msg="StartContainer for \"c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f\"" May 15 15:44:51.390297 containerd[1530]: time="2025-05-15T15:44:51.390192797Z" level=info msg="connecting to shim c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f" address="unix:///run/containerd/s/9e059022c53f00bbc3b6219a90968da7323f687f93e372092af2de27b9024a4f" protocol=ttrpc version=3 May 15 15:44:51.457598 systemd[1]: Started cri-containerd-c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f.scope - libcontainer container c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f. May 15 15:44:51.573746 containerd[1530]: time="2025-05-15T15:44:51.572997312Z" level=info msg="StartContainer for \"c5977fdc81d1563f3287c77050ebf75ff54177a4f59281b8d8e9edf0a0ece66f\" returns successfully" May 15 15:44:51.906847 kubelet[2778]: E0515 15:44:51.906356 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:52.104137 kubelet[2778]: E0515 15:44:52.103881 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:53.104436 kubelet[2778]: I0515 15:44:53.104396 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 15:44:53.107242 kubelet[2778]: E0515 15:44:53.105368 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:53.906941 kubelet[2778]: E0515 15:44:53.906883 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:54.542982 kubelet[2778]: I0515 15:44:54.542848 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 15:44:54.548850 kubelet[2778]: E0515 15:44:54.548779 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:54.601262 kubelet[2778]: I0515 15:44:54.601067 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c75d45c47-9qmhx" podStartSLOduration=4.844689055 podStartE2EDuration="9.60101612s" podCreationTimestamp="2025-05-15 15:44:45 +0000 UTC" 
firstStartedPulling="2025-05-15 15:44:46.544293945 +0000 UTC m=+21.907641504" lastFinishedPulling="2025-05-15 15:44:51.300620978 +0000 UTC m=+26.663968569" observedRunningTime="2025-05-15 15:44:52.118663117 +0000 UTC m=+27.482010702" watchObservedRunningTime="2025-05-15 15:44:54.60101612 +0000 UTC m=+29.964363715" May 15 15:44:55.118019 kubelet[2778]: E0515 15:44:55.117546 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:55.908056 kubelet[2778]: E0515 15:44:55.907977 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:57.817826 containerd[1530]: time="2025-05-15T15:44:57.817757584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:57.819193 containerd[1530]: time="2025-05-15T15:44:57.819123625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 15:44:57.820315 containerd[1530]: time="2025-05-15T15:44:57.820247896Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:57.822288 containerd[1530]: time="2025-05-15T15:44:57.821994817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:44:57.822923 containerd[1530]: time="2025-05-15T15:44:57.822887039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.52127113s" May 15 15:44:57.823005 containerd[1530]: time="2025-05-15T15:44:57.822935373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 15:44:57.829431 containerd[1530]: time="2025-05-15T15:44:57.829365947Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 15:44:57.864632 containerd[1530]: time="2025-05-15T15:44:57.863933502Z" level=info msg="Container 277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2: CDI devices from CRI Config.CDIDevices: []" May 15 15:44:57.894767 containerd[1530]: time="2025-05-15T15:44:57.894667303Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\"" May 15 15:44:57.898081 containerd[1530]: time="2025-05-15T15:44:57.896086803Z" level=info msg="StartContainer for 
\"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\"" May 15 15:44:57.898354 containerd[1530]: time="2025-05-15T15:44:57.898080561Z" level=info msg="connecting to shim 277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2" address="unix:///run/containerd/s/8280441f141ab184c4ba9783f0f24f6a722c797718422063846e1a0f3b9536a1" protocol=ttrpc version=3 May 15 15:44:57.907993 kubelet[2778]: E0515 15:44:57.907000 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:44:57.934193 systemd[1]: Started cri-containerd-277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2.scope - libcontainer container 277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2. May 15 15:44:58.004840 containerd[1530]: time="2025-05-15T15:44:58.004792572Z" level=info msg="StartContainer for \"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\" returns successfully" May 15 15:44:58.149949 kubelet[2778]: E0515 15:44:58.148834 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:58.693162 systemd[1]: cri-containerd-277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2.scope: Deactivated successfully. May 15 15:44:58.694253 systemd[1]: cri-containerd-277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2.scope: Consumed 602ms CPU time, 147M memory peak, 1.6M read from disk, 154M written to disk. May 15 15:44:58.702503 containerd[1530]: time="2025-05-15T15:44:58.700730307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\" id:\"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\" pid:3460 exited_at:{seconds:1747323898 nanos:700313483}" May 15 15:44:58.704315 containerd[1530]: time="2025-05-15T15:44:58.704115758Z" level=info msg="received exit event container_id:\"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\" id:\"277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2\" pid:3460 exited_at:{seconds:1747323898 nanos:700313483}" May 15 15:44:58.749504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277d2417b5b6306dc9b1134da52e39cd3201cc402b5253a0bbafd28f75e674d2-rootfs.mount: Deactivated successfully. 
May 15 15:44:58.831813 kubelet[2778]: I0515 15:44:58.830322 2778 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 15:44:58.872648 kubelet[2778]: I0515 15:44:58.872580 2778 topology_manager.go:215] "Topology Admit Handler" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" podNamespace="calico-system" podName="calico-kube-controllers-65cd59455f-72w5b" May 15 15:44:58.878769 kubelet[2778]: I0515 15:44:58.878275 2778 topology_manager.go:215] "Topology Admit Handler" podUID="313f2947-fbea-432c-a75e-2aede18039e7" podNamespace="calico-apiserver" podName="calico-apiserver-7b8b48d5df-mc8mx" May 15 15:44:58.880198 kubelet[2778]: I0515 15:44:58.880142 2778 topology_manager.go:215] "Topology Admit Handler" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lmnwc" May 15 15:44:58.882134 kubelet[2778]: I0515 15:44:58.882084 2778 topology_manager.go:215] "Topology Admit Handler" podUID="5e09e623-1ef7-4492-acf8-3fd63c18d853" podNamespace="calico-apiserver" podName="calico-apiserver-7b8b48d5df-56f7s" May 15 15:44:58.884052 kubelet[2778]: I0515 15:44:58.883996 2778 topology_manager.go:215] "Topology Admit Handler" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vdlk8" May 15 15:44:58.889328 systemd[1]: Created slice kubepods-besteffort-pod86e0d73b_0507_46e9_944b_4fbf6879e642.slice - libcontainer container kubepods-besteffort-pod86e0d73b_0507_46e9_944b_4fbf6879e642.slice. May 15 15:44:58.903083 kubelet[2778]: W0515 15:44:58.902832 2778 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4334.0.0-a-8a7930f089" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-8a7930f089' and this object May 15 15:44:58.903083 kubelet[2778]: E0515 15:44:58.902904 2778 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4334.0.0-a-8a7930f089" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4334.0.0-a-8a7930f089' and this object May 15 15:44:58.911145 systemd[1]: Created slice kubepods-besteffort-pod313f2947_fbea_432c_a75e_2aede18039e7.slice - libcontainer container kubepods-besteffort-pod313f2947_fbea_432c_a75e_2aede18039e7.slice. May 15 15:44:58.924370 systemd[1]: Created slice kubepods-burstable-pod2060f7d9_6d6b_4e81_9323_08b479f092eb.slice - libcontainer container kubepods-burstable-pod2060f7d9_6d6b_4e81_9323_08b479f092eb.slice. May 15 15:44:58.939832 systemd[1]: Created slice kubepods-besteffort-pod5e09e623_1ef7_4492_acf8_3fd63c18d853.slice - libcontainer container kubepods-besteffort-pod5e09e623_1ef7_4492_acf8_3fd63c18d853.slice. May 15 15:44:58.948218 systemd[1]: Created slice kubepods-burstable-podd4ab97e1_a8ea_4ff1_b2ca_fc307beaaf5c.slice - libcontainer container kubepods-burstable-podd4ab97e1_a8ea_4ff1_b2ca_fc307beaaf5c.slice. 
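[editor's note: the "Created slice" entries above show the systemd cgroup driver's naming scheme, kubepods-<qos>-pod<uid>.slice, with every "-" in the pod UID escaped to "_" because systemd treats "-" as a slice-hierarchy separator. A sketch of the mapping follows; sliceName is a hypothetical helper, not kubelet's cgroup manager:]

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName builds the systemd slice name for a pod from its QoS
    // class and UID, escaping dashes the way the entries above show.
    func sliceName(qos, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos,
            strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "86e0d73b-0507-46e9-944b-4fbf6879e642"))
        // kubepods-besteffort-pod86e0d73b_0507_46e9_944b_4fbf6879e642.slice
        fmt.Println(sliceName("burstable", "2060f7d9-6d6b-4e81-9323-08b479f092eb"))
        // kubepods-burstable-pod2060f7d9_6d6b_4e81_9323_08b479f092eb.slice
    }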
May 15 15:44:58.954620 kubelet[2778]: I0515 15:44:58.954380 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9gj\" (UniqueName: \"kubernetes.io/projected/d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c-kube-api-access-7t9gj\") pod \"coredns-7db6d8ff4d-vdlk8\" (UID: \"d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c\") " pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:44:58.955541 kubelet[2778]: I0515 15:44:58.955125 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/313f2947-fbea-432c-a75e-2aede18039e7-calico-apiserver-certs\") pod \"calico-apiserver-7b8b48d5df-mc8mx\" (UID: \"313f2947-fbea-432c-a75e-2aede18039e7\") " pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" May 15 15:44:58.956894 kubelet[2778]: I0515 15:44:58.956757 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nft5c\" (UniqueName: \"kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c\") pod \"calico-apiserver-7b8b48d5df-56f7s\" (UID: \"5e09e623-1ef7-4492-acf8-3fd63c18d853\") " pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" May 15 15:44:58.957189 kubelet[2778]: I0515 15:44:58.957061 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2060f7d9-6d6b-4e81-9323-08b479f092eb-config-volume\") pod \"coredns-7db6d8ff4d-lmnwc\" (UID: \"2060f7d9-6d6b-4e81-9323-08b479f092eb\") " pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:44:58.957320 kubelet[2778]: I0515 15:44:58.957094 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e09e623-1ef7-4492-acf8-3fd63c18d853-calico-apiserver-certs\") pod \"calico-apiserver-7b8b48d5df-56f7s\" (UID: \"5e09e623-1ef7-4492-acf8-3fd63c18d853\") " pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" May 15 15:44:58.957734 kubelet[2778]: I0515 15:44:58.957396 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tklb8\" (UniqueName: \"kubernetes.io/projected/86e0d73b-0507-46e9-944b-4fbf6879e642-kube-api-access-tklb8\") pod \"calico-kube-controllers-65cd59455f-72w5b\" (UID: \"86e0d73b-0507-46e9-944b-4fbf6879e642\") " pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:44:58.957734 kubelet[2778]: I0515 15:44:58.957434 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h26zw\" (UniqueName: \"kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw\") pod \"calico-apiserver-7b8b48d5df-mc8mx\" (UID: \"313f2947-fbea-432c-a75e-2aede18039e7\") " pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" May 15 15:44:58.957734 kubelet[2778]: I0515 15:44:58.957459 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c-config-volume\") pod \"coredns-7db6d8ff4d-vdlk8\" (UID: \"d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c\") " pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:44:58.957734 kubelet[2778]: I0515 15:44:58.957517 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86e0d73b-0507-46e9-944b-4fbf6879e642-tigera-ca-bundle\") pod \"calico-kube-controllers-65cd59455f-72w5b\" (UID: \"86e0d73b-0507-46e9-944b-4fbf6879e642\") " pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:44:58.957734 kubelet[2778]: I0515 15:44:58.957539 2778 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9ft\" (UniqueName: \"kubernetes.io/projected/2060f7d9-6d6b-4e81-9323-08b479f092eb-kube-api-access-cp9ft\") pod \"coredns-7db6d8ff4d-lmnwc\" (UID: \"2060f7d9-6d6b-4e81-9323-08b479f092eb\") " pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:44:59.152789 kubelet[2778]: E0515 15:44:59.152681 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:59.153769 containerd[1530]: time="2025-05-15T15:44:59.153733486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:44:59.205106 containerd[1530]: time="2025-05-15T15:44:59.204970954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:44:59.233772 kubelet[2778]: E0515 15:44:59.233313 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:59.237046 containerd[1530]: time="2025-05-15T15:44:59.236674200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:44:59.255034 kubelet[2778]: E0515 15:44:59.254902 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:44:59.265115 containerd[1530]: time="2025-05-15T15:44:59.265066398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:44:59.464298 containerd[1530]: time="2025-05-15T15:44:59.463992313Z" level=error msg="Failed to destroy network for sandbox \"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.466732 containerd[1530]: time="2025-05-15T15:44:59.466603083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.469180 kubelet[2778]: E0515 15:44:59.468131 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.469180 kubelet[2778]: E0515 15:44:59.468324 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:44:59.469180 kubelet[2778]: E0515 15:44:59.468358 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:44:59.470272 kubelet[2778]: E0515 15:44:59.468441 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af98e102baf5159a8be57b5c7aca687b0d50c329a25cc40f2cc0788976948c12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:44:59.486601 containerd[1530]: time="2025-05-15T15:44:59.486503651Z" level=error msg="Failed to destroy network for sandbox \"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.488629 containerd[1530]: time="2025-05-15T15:44:59.488534200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.489878 kubelet[2778]: E0515 15:44:59.488991 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.489878 kubelet[2778]: E0515 15:44:59.489081 2778 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:44:59.489878 kubelet[2778]: E0515 15:44:59.489120 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:44:59.490070 kubelet[2778]: E0515 15:44:59.489187 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63e1c22afb9914587c2228278fb739a47ccb9de6f57a6810ba5bf76dd37aac07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:44:59.504121 containerd[1530]: time="2025-05-15T15:44:59.503924374Z" level=error msg="Failed to destroy network for sandbox \"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.506652 containerd[1530]: time="2025-05-15T15:44:59.506443265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.507294 kubelet[2778]: E0515 15:44:59.507056 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.507294 kubelet[2778]: E0515 15:44:59.507154 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:44:59.507294 kubelet[2778]: E0515 15:44:59.507189 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:44:59.509299 kubelet[2778]: E0515 15:44:59.507282 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fae06e99770ac49d178ec208e4ba385d4829fde77f470af1a4911298240c8fbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:44:59.915359 systemd[1]: Created slice kubepods-besteffort-podd39bfc53_e893_4a7d_a3e9_870e79b27f93.slice - libcontainer container kubepods-besteffort-podd39bfc53_e893_4a7d_a3e9_870e79b27f93.slice. May 15 15:44:59.918709 containerd[1530]: time="2025-05-15T15:44:59.918389623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:44:59.992301 containerd[1530]: time="2025-05-15T15:44:59.992247326Z" level=error msg="Failed to destroy network for sandbox \"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.993742 containerd[1530]: time="2025-05-15T15:44:59.993642059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.994675 kubelet[2778]: E0515 15:44:59.994182 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:44:59.994675 kubelet[2778]: E0515 15:44:59.994259 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:44:59.994675 kubelet[2778]: E0515 15:44:59.994283 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:44:59.995167 kubelet[2778]: E0515 15:44:59.994337 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"857c20dff243573bfd7626ebd595495b7daf4e4604d63a8efb741c8ffab097f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:45:00.071572 kubelet[2778]: E0515 15:45:00.071438 2778 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.071572 kubelet[2778]: E0515 15:45:00.071496 2778 projected.go:200] Error preparing data for projected volume kube-api-access-nft5c for pod calico-apiserver/calico-apiserver-7b8b48d5df-56f7s: failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.079511 kubelet[2778]: E0515 15:45:00.078117 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c podName:5e09e623-1ef7-4492-acf8-3fd63c18d853 nodeName:}" failed. No retries permitted until 2025-05-15 15:45:00.578072949 +0000 UTC m=+35.941420532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nft5c" (UniqueName: "kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c") pod "calico-apiserver-7b8b48d5df-56f7s" (UID: "5e09e623-1ef7-4492-acf8-3fd63c18d853") : failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.081159 kubelet[2778]: E0515 15:45:00.080983 2778 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.081159 kubelet[2778]: E0515 15:45:00.081040 2778 projected.go:200] Error preparing data for projected volume kube-api-access-h26zw for pod calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx: failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.081159 kubelet[2778]: E0515 15:45:00.081119 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw podName:313f2947-fbea-432c-a75e-2aede18039e7 nodeName:}" failed. 
No retries permitted until 2025-05-15 15:45:00.581091713 +0000 UTC m=+35.944439284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h26zw" (UniqueName: "kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw") pod "calico-apiserver-7b8b48d5df-mc8mx" (UID: "313f2947-fbea-432c-a75e-2aede18039e7") : failed to sync configmap cache: timed out waiting for the condition May 15 15:45:00.084102 systemd[1]: run-netns-cni\x2dc3c99fc1\x2dd8e0\x2d3d83\x2dca09\x2df6dfa81fe43e.mount: Deactivated successfully. May 15 15:45:00.084234 systemd[1]: run-netns-cni\x2d19bcd8fb\x2dd544\x2d6697\x2d5b38\x2d9f9a9a31b0e3.mount: Deactivated successfully. May 15 15:45:00.721331 containerd[1530]: time="2025-05-15T15:45:00.720961159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b8b48d5df-mc8mx,Uid:313f2947-fbea-432c-a75e-2aede18039e7,Namespace:calico-apiserver,Attempt:0,}" May 15 15:45:00.749577 containerd[1530]: time="2025-05-15T15:45:00.748635598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b8b48d5df-56f7s,Uid:5e09e623-1ef7-4492-acf8-3fd63c18d853,Namespace:calico-apiserver,Attempt:0,}" May 15 15:45:00.871528 containerd[1530]: time="2025-05-15T15:45:00.871455853Z" level=error msg="Failed to destroy network for sandbox \"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.873329 containerd[1530]: time="2025-05-15T15:45:00.873250049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b8b48d5df-mc8mx,Uid:313f2947-fbea-432c-a75e-2aede18039e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.874378 kubelet[2778]: E0515 15:45:00.874297 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.877425 kubelet[2778]: E0515 15:45:00.874386 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" May 15 15:45:00.877425 kubelet[2778]: E0515 15:45:00.874415 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" May 15 15:45:00.877425 kubelet[2778]: E0515 15:45:00.874544 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b8b48d5df-mc8mx_calico-apiserver(313f2947-fbea-432c-a75e-2aede18039e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b8b48d5df-mc8mx_calico-apiserver(313f2947-fbea-432c-a75e-2aede18039e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fbdd0e7d353a67e12e48625cad905397bbc77989aa0fdc8bc32d33b9eb4bdd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" podUID="313f2947-fbea-432c-a75e-2aede18039e7" May 15 15:45:00.903618 containerd[1530]: time="2025-05-15T15:45:00.903549362Z" level=error msg="Failed to destroy network for sandbox \"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.906092 containerd[1530]: time="2025-05-15T15:45:00.906000946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b8b48d5df-56f7s,Uid:5e09e623-1ef7-4492-acf8-3fd63c18d853,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.906968 kubelet[2778]: E0515 15:45:00.906915 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:00.907451 kubelet[2778]: E0515 15:45:00.907184 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" May 15 15:45:00.907451 kubelet[2778]: E0515 15:45:00.907237 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" May 15 15:45:00.907813 kubelet[2778]: E0515 15:45:00.907316 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7b8b48d5df-56f7s_calico-apiserver(5e09e623-1ef7-4492-acf8-3fd63c18d853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b8b48d5df-56f7s_calico-apiserver(5e09e623-1ef7-4492-acf8-3fd63c18d853)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"593e22d7e38958d9be1cb0ede58dc514ff9dbf254857e2345bcbabec4f46dbf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" podUID="5e09e623-1ef7-4492-acf8-3fd63c18d853" May 15 15:45:01.084169 systemd[1]: run-netns-cni\x2d8d98d745\x2d8c86\x2db5a4\x2da6a3\x2d591f4fc8dddd.mount: Deactivated successfully. May 15 15:45:01.084344 systemd[1]: run-netns-cni\x2d10c1edda\x2d7127\x2d5657\x2d8c85\x2daa8e1c101899.mount: Deactivated successfully. May 15 15:45:02.537790 kernel: hrtimer: interrupt took 998842 ns May 15 15:45:05.341133 kubelet[2778]: I0515 15:45:05.332065 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:05.342351 kubelet[2778]: I0515 15:45:05.341455 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:05.352074 kubelet[2778]: I0515 15:45:05.351979 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:05.452109 kubelet[2778]: I0515 15:45:05.451484 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:05.452109 kubelet[2778]: I0515 15:45:05.451691 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx","calico-apiserver/calico-apiserver-7b8b48d5df-56f7s","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","calico-system/csi-node-driver-h6786","tigera-operator/tigera-operator-797db67f8-qfvrk","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:05.479738 kubelet[2778]: I0515 15:45:05.479619 2778 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx" May 15 15:45:05.479738 kubelet[2778]: I0515 15:45:05.479679 2778 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx"] May 15 15:45:05.569271 kubelet[2778]: I0515 15:45:05.568039 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-lr9rl" nodeCondition=["DiskPressure"] May 15 15:45:05.623914 kubelet[2778]: I0515 15:45:05.623555 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h26zw\" (UniqueName: \"kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw\") pod \"313f2947-fbea-432c-a75e-2aede18039e7\" (UID: \"313f2947-fbea-432c-a75e-2aede18039e7\") " May 15 15:45:05.627815 kubelet[2778]: I0515 15:45:05.623693 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/313f2947-fbea-432c-a75e-2aede18039e7-calico-apiserver-certs\") pod \"313f2947-fbea-432c-a75e-2aede18039e7\" (UID: \"313f2947-fbea-432c-a75e-2aede18039e7\") " May 15 15:45:05.656140 systemd[1]: var-lib-kubelet-pods-313f2947\x2dfbea\x2d432c\x2da75e\x2d2aede18039e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh26zw.mount: Deactivated successfully. May 15 15:45:05.662073 kubelet[2778]: I0515 15:45:05.661985 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313f2947-fbea-432c-a75e-2aede18039e7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "313f2947-fbea-432c-a75e-2aede18039e7" (UID: "313f2947-fbea-432c-a75e-2aede18039e7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 15:45:05.663352 systemd[1]: var-lib-kubelet-pods-313f2947\x2dfbea\x2d432c\x2da75e\x2d2aede18039e7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 15:45:05.665811 kubelet[2778]: I0515 15:45:05.665728 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw" (OuterVolumeSpecName: "kube-api-access-h26zw") pod "313f2947-fbea-432c-a75e-2aede18039e7" (UID: "313f2947-fbea-432c-a75e-2aede18039e7"). InnerVolumeSpecName "kube-api-access-h26zw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:45:05.689848 kubelet[2778]: I0515 15:45:05.687333 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-sbprc" nodeCondition=["DiskPressure"] May 15 15:45:05.728534 kubelet[2778]: I0515 15:45:05.728471 2778 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/313f2947-fbea-432c-a75e-2aede18039e7-calico-apiserver-certs\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:05.728534 kubelet[2778]: I0515 15:45:05.728520 2778 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h26zw\" (UniqueName: \"kubernetes.io/projected/313f2947-fbea-432c-a75e-2aede18039e7-kube-api-access-h26zw\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:05.770692 kubelet[2778]: I0515 15:45:05.770436 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-gk6h6" nodeCondition=["DiskPressure"] May 15 15:45:05.858062 kubelet[2778]: I0515 15:45:05.857986 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-4twjv" nodeCondition=["DiskPressure"] May 15 15:45:05.971682 kubelet[2778]: I0515 15:45:05.971587 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-hjvbl" nodeCondition=["DiskPressure"] May 15 15:45:06.066357 kubelet[2778]: I0515 15:45:06.066243 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-59z5g" nodeCondition=["DiskPressure"] May 15 15:45:06.160959 kubelet[2778]: I0515 15:45:06.160905 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="calico-apiserver/calico-apiserver-7b8b48d5df-w88mc" nodeCondition=["DiskPressure"] May 15 15:45:06.272472 systemd[1]: Removed slice kubepods-besteffort-pod313f2947_fbea_432c_a75e_2aede18039e7.slice - libcontainer container kubepods-besteffort-pod313f2947_fbea_432c_a75e_2aede18039e7.slice. 
May 15 15:45:06.456583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278833998.mount: Deactivated successfully. May 15 15:45:06.461760 containerd[1530]: time="2025-05-15T15:45:06.461036754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount278833998: mkdir /var/lib/containerd/tmpmounts/containerd-mount278833998/usr/lib/.build-id/88: no space left on device" May 15 15:45:06.461760 containerd[1530]: time="2025-05-15T15:45:06.461087796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:45:06.463816 kubelet[2778]: E0515 15:45:06.463680 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount278833998: mkdir /var/lib/containerd/tmpmounts/containerd-mount278833998/usr/lib/.build-id/88: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:06.464639 kubelet[2778]: E0515 15:45:06.463835 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount278833998: mkdir /var/lib/containerd/tmpmounts/containerd-mount278833998/usr/lib/.build-id/88: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:06.472096 kubelet[2778]: E0515 15:45:06.471936 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-nrpmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nfvst_calico-system(85ff5786-c114-43e4-8f58-d6ff4433361a): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount278833998: mkdir /var/lib/containerd/tmpmounts/containerd-mount278833998/usr/lib/.build-id/88: no space left on device May 15 15:45:06.472589 kubelet[2778]: E0515 15:45:06.472032 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount278833998: mkdir /var/lib/containerd/tmpmounts/containerd-mount278833998/usr/lib/.build-id/88: no space left on device\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:45:06.480147 kubelet[2778]: I0515 15:45:06.479962 2778 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-mc8mx"] May 15 15:45:06.505324 kubelet[2778]: I0515 15:45:06.505255 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:06.506330 kubelet[2778]: I0515 15:45:06.505369 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:06.514979 kubelet[2778]: I0515 15:45:06.514383 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:06.547851 kubelet[2778]: I0515 15:45:06.546180 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:06.547851 kubelet[2778]: I0515 15:45:06.546428 2778 eviction_manager.go:395] "Eviction manager: pods ranked 
for eviction" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-56f7s","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","tigera-operator/tigera-operator-797db67f8-qfvrk","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:06.564121 kubelet[2778]: I0515 15:45:06.563945 2778 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7b8b48d5df-56f7s" May 15 15:45:06.565105 kubelet[2778]: I0515 15:45:06.565053 2778 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-56f7s"] May 15 15:45:06.644580 kubelet[2778]: I0515 15:45:06.643896 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e09e623-1ef7-4492-acf8-3fd63c18d853-calico-apiserver-certs\") pod \"5e09e623-1ef7-4492-acf8-3fd63c18d853\" (UID: \"5e09e623-1ef7-4492-acf8-3fd63c18d853\") " May 15 15:45:06.644580 kubelet[2778]: I0515 15:45:06.643982 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nft5c\" (UniqueName: \"kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c\") pod \"5e09e623-1ef7-4492-acf8-3fd63c18d853\" (UID: \"5e09e623-1ef7-4492-acf8-3fd63c18d853\") " May 15 15:45:06.654393 kubelet[2778]: I0515 15:45:06.654321 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e09e623-1ef7-4492-acf8-3fd63c18d853-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5e09e623-1ef7-4492-acf8-3fd63c18d853" (UID: "5e09e623-1ef7-4492-acf8-3fd63c18d853"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 15:45:06.659187 systemd[1]: var-lib-kubelet-pods-5e09e623\x2d1ef7\x2d4492\x2dacf8\x2d3fd63c18d853-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 15:45:06.664054 kubelet[2778]: I0515 15:45:06.663980 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c" (OuterVolumeSpecName: "kube-api-access-nft5c") pod "5e09e623-1ef7-4492-acf8-3fd63c18d853" (UID: "5e09e623-1ef7-4492-acf8-3fd63c18d853"). InnerVolumeSpecName "kube-api-access-nft5c". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:45:06.667939 systemd[1]: var-lib-kubelet-pods-5e09e623\x2d1ef7\x2d4492\x2dacf8\x2d3fd63c18d853-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnft5c.mount: Deactivated successfully. 
May 15 15:45:06.746797 kubelet[2778]: I0515 15:45:06.744499 2778 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nft5c\" (UniqueName: \"kubernetes.io/projected/5e09e623-1ef7-4492-acf8-3fd63c18d853-kube-api-access-nft5c\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:06.746797 kubelet[2778]: I0515 15:45:06.746686 2778 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e09e623-1ef7-4492-acf8-3fd63c18d853-calico-apiserver-certs\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:06.930376 systemd[1]: Removed slice kubepods-besteffort-pod5e09e623_1ef7_4492_acf8_3fd63c18d853.slice - libcontainer container kubepods-besteffort-pod5e09e623_1ef7_4492_acf8_3fd63c18d853.slice. May 15 15:45:07.231036 kubelet[2778]: E0515 15:45:07.230571 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:07.252610 kubelet[2778]: E0515 15:45:07.252343 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:45:07.566146 kubelet[2778]: I0515 15:45:07.565805 2778 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7b8b48d5df-56f7s"] May 15 15:45:07.587080 kubelet[2778]: I0515 15:45:07.587025 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:07.587080 kubelet[2778]: I0515 15:45:07.587097 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:07.592760 kubelet[2778]: I0515 15:45:07.592637 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:07.614374 kubelet[2778]: I0515 15:45:07.614303 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:07.615063 kubelet[2778]: I0515 15:45:07.614946 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","tigera-operator/tigera-operator-797db67f8-qfvrk","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:07.615063 kubelet[2778]: E0515 15:45:07.615031 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:07.615570 kubelet[2778]: E0515 15:45:07.615044 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:07.615570 kubelet[2778]: E0515 15:45:07.615358 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:07.615570 kubelet[2778]: E0515 15:45:07.615368 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/csi-node-driver-h6786" May 15 15:45:07.615570 kubelet[2778]: E0515 15:45:07.615377 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:07.617400 containerd[1530]: time="2025-05-15T15:45:07.617336460Z" level=info msg="StopContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" with timeout 60 (s)" May 15 15:45:07.632075 containerd[1530]: time="2025-05-15T15:45:07.631994586Z" level=info msg="Stop container \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" with signal terminated" May 15 15:45:07.692154 systemd[1]: cri-containerd-6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b.scope: Deactivated successfully. May 15 15:45:07.693256 systemd[1]: cri-containerd-6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b.scope: Consumed 1.508s CPU time, 39.2M memory peak, 10.6M read from disk. May 15 15:45:07.697759 containerd[1530]: time="2025-05-15T15:45:07.697647314Z" level=info msg="received exit event container_id:\"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" id:\"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" pid:3140 exited_at:{seconds:1747323907 nanos:696231217}" May 15 15:45:07.709891 containerd[1530]: time="2025-05-15T15:45:07.709779048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" id:\"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" pid:3140 exited_at:{seconds:1747323907 nanos:696231217}" May 15 15:45:07.739148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b-rootfs.mount: Deactivated successfully. May 15 15:45:07.765401 containerd[1530]: time="2025-05-15T15:45:07.765316005Z" level=info msg="StopContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" returns successfully" May 15 15:45:07.767830 containerd[1530]: time="2025-05-15T15:45:07.767314071Z" level=info msg="StopPodSandbox for \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\"" May 15 15:45:07.793418 containerd[1530]: time="2025-05-15T15:45:07.793324178Z" level=info msg="Container to stop \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 15:45:07.809835 systemd[1]: cri-containerd-6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d.scope: Deactivated successfully. May 15 15:45:07.815485 containerd[1530]: time="2025-05-15T15:45:07.815395279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" id:\"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" pid:2970 exit_status:137 exited_at:{seconds:1747323907 nanos:813353295}" May 15 15:45:07.872417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d-rootfs.mount: Deactivated successfully. 
May 15 15:45:07.878413 containerd[1530]: time="2025-05-15T15:45:07.877814834Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/5f284d8739288baa82b78cc5989dcdc838467a5b0abf65254b1f52182c15a86d->@: write: broken pipe" runtime=io.containerd.runc.v2 May 15 15:45:07.879366 containerd[1530]: time="2025-05-15T15:45:07.878363139Z" level=info msg="shim disconnected" id=6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d namespace=k8s.io May 15 15:45:07.879366 containerd[1530]: time="2025-05-15T15:45:07.879043782Z" level=warning msg="cleaning up after shim disconnected" id=6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d namespace=k8s.io May 15 15:45:07.879366 containerd[1530]: time="2025-05-15T15:45:07.879067572Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 15:45:07.943347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d-shm.mount: Deactivated successfully. May 15 15:45:07.956300 containerd[1530]: time="2025-05-15T15:45:07.956087046Z" level=info msg="received exit event sandbox_id:\"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" exit_status:137 exited_at:{seconds:1747323907 nanos:813353295}" May 15 15:45:07.965195 containerd[1530]: time="2025-05-15T15:45:07.965110534Z" level=info msg="TearDown network for sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" successfully" May 15 15:45:07.965195 containerd[1530]: time="2025-05-15T15:45:07.965183296Z" level=info msg="StopPodSandbox for \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" returns successfully" May 15 15:45:07.983635 kubelet[2778]: I0515 15:45:07.983551 2778 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-797db67f8-qfvrk" May 15 15:45:07.983635 kubelet[2778]: I0515 15:45:07.983582 2778 eviction_manager.go:205] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-qfvrk"] May 15 15:45:08.033580 kubelet[2778]: I0515 15:45:08.033404 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zc955" nodeCondition=["DiskPressure"] May 15 15:45:08.058071 kubelet[2778]: I0515 15:45:08.057993 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b0cd25c-cf9a-4891-89b8-290ccc6590da-var-lib-calico\") pod \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\" (UID: \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\") " May 15 15:45:08.058071 kubelet[2778]: I0515 15:45:08.058077 2778 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzxf5\" (UniqueName: \"kubernetes.io/projected/6b0cd25c-cf9a-4891-89b8-290ccc6590da-kube-api-access-bzxf5\") pod \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\" (UID: \"6b0cd25c-cf9a-4891-89b8-290ccc6590da\") " May 15 15:45:08.058941 kubelet[2778]: I0515 15:45:08.058838 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b0cd25c-cf9a-4891-89b8-290ccc6590da-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "6b0cd25c-cf9a-4891-89b8-290ccc6590da" (UID: "6b0cd25c-cf9a-4891-89b8-290ccc6590da"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 15:45:08.074981 systemd[1]: var-lib-kubelet-pods-6b0cd25c\x2dcf9a\x2d4891\x2d89b8\x2d290ccc6590da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzxf5.mount: Deactivated successfully. May 15 15:45:08.075249 kubelet[2778]: I0515 15:45:08.075071 2778 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0cd25c-cf9a-4891-89b8-290ccc6590da-kube-api-access-bzxf5" (OuterVolumeSpecName: "kube-api-access-bzxf5") pod "6b0cd25c-cf9a-4891-89b8-290ccc6590da" (UID: "6b0cd25c-cf9a-4891-89b8-290ccc6590da"). InnerVolumeSpecName "kube-api-access-bzxf5". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 15:45:08.092986 kubelet[2778]: I0515 15:45:08.092844 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-k4pnt" nodeCondition=["DiskPressure"] May 15 15:45:08.158843 kubelet[2778]: I0515 15:45:08.158638 2778 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b0cd25c-cf9a-4891-89b8-290ccc6590da-var-lib-calico\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:08.158843 kubelet[2778]: I0515 15:45:08.158692 2778 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bzxf5\" (UniqueName: \"kubernetes.io/projected/6b0cd25c-cf9a-4891-89b8-290ccc6590da-kube-api-access-bzxf5\") on node \"ci-4334.0.0-a-8a7930f089\" DevicePath \"\"" May 15 15:45:08.162240 kubelet[2778]: I0515 15:45:08.162151 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rtpr4" nodeCondition=["DiskPressure"] May 15 15:45:08.235906 kubelet[2778]: I0515 15:45:08.235856 2778 scope.go:117] "RemoveContainer" containerID="6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b" May 15 15:45:08.245612 containerd[1530]: time="2025-05-15T15:45:08.245547176Z" level=info msg="RemoveContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\"" May 15 15:45:08.259589 systemd[1]: Removed slice kubepods-besteffort-pod6b0cd25c_cf9a_4891_89b8_290ccc6590da.slice - libcontainer container kubepods-besteffort-pod6b0cd25c_cf9a_4891_89b8_290ccc6590da.slice. May 15 15:45:08.259864 systemd[1]: kubepods-besteffort-pod6b0cd25c_cf9a_4891_89b8_290ccc6590da.slice: Consumed 1.550s CPU time, 39.5M memory peak, 10.6M read from disk. 
May 15 15:45:08.277758 containerd[1530]: time="2025-05-15T15:45:08.277652254Z" level=info msg="RemoveContainer for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" returns successfully" May 15 15:45:08.299066 kubelet[2778]: I0515 15:45:08.297086 2778 scope.go:117] "RemoveContainer" containerID="6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b" May 15 15:45:08.299780 containerd[1530]: time="2025-05-15T15:45:08.299486754Z" level=error msg="ContainerStatus for \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\": not found" May 15 15:45:08.301935 kubelet[2778]: E0515 15:45:08.299795 2778 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\": not found" containerID="6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b" May 15 15:45:08.301935 kubelet[2778]: I0515 15:45:08.299855 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b"} err="failed to get container status \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ad841068af906a7653be27f6ced173fe7fc78189772f0079f6f699e5696ac2b\": not found" May 15 15:45:08.328883 kubelet[2778]: I0515 15:45:08.328807 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vfp5c" nodeCondition=["DiskPressure"] May 15 15:45:08.384084 kubelet[2778]: I0515 15:45:08.383996 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-gws2v" nodeCondition=["DiskPressure"] May 15 15:45:08.449005 kubelet[2778]: I0515 15:45:08.448883 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5ghr2" nodeCondition=["DiskPressure"] May 15 15:45:08.504445 kubelet[2778]: I0515 15:45:08.504366 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-d2f8k" nodeCondition=["DiskPressure"] May 15 15:45:08.561407 kubelet[2778]: I0515 15:45:08.560786 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dh5z4" nodeCondition=["DiskPressure"] May 15 15:45:08.612320 kubelet[2778]: I0515 15:45:08.612217 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7v7pb" nodeCondition=["DiskPressure"] May 15 15:45:08.652005 kubelet[2778]: I0515 15:45:08.651916 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-744hz" nodeCondition=["DiskPressure"] May 15 15:45:08.697614 kubelet[2778]: I0515 15:45:08.697515 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-c57rq" nodeCondition=["DiskPressure"] May 15 15:45:08.772106 kubelet[2778]: I0515 15:45:08.771896 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nq7cm" nodeCondition=["DiskPressure"] May 15 15:45:08.930335 kubelet[2778]: I0515 15:45:08.930213 2778 eviction_manager.go:173] 
"Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-z89gk" nodeCondition=["DiskPressure"] May 15 15:45:08.984853 kubelet[2778]: I0515 15:45:08.984226 2778 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-797db67f8-qfvrk"] May 15 15:45:09.000933 kubelet[2778]: I0515 15:45:09.000859 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:09.002203 kubelet[2778]: I0515 15:45:09.001314 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:09.009938 containerd[1530]: time="2025-05-15T15:45:09.009822408Z" level=info msg="StopPodSandbox for \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\"" May 15 15:45:09.013026 containerd[1530]: time="2025-05-15T15:45:09.011593896Z" level=info msg="TearDown network for sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" successfully" May 15 15:45:09.013026 containerd[1530]: time="2025-05-15T15:45:09.011681590Z" level=info msg="StopPodSandbox for \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" returns successfully" May 15 15:45:09.013538 containerd[1530]: time="2025-05-15T15:45:09.013122928Z" level=info msg="RemovePodSandbox for \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\"" May 15 15:45:09.013538 containerd[1530]: time="2025-05-15T15:45:09.013189913Z" level=info msg="Forcibly stopping sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\"" May 15 15:45:09.013538 containerd[1530]: time="2025-05-15T15:45:09.013329430Z" level=info msg="TearDown network for sandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" successfully" May 15 15:45:09.020488 containerd[1530]: time="2025-05-15T15:45:09.020406394Z" level=info msg="Ensure that sandbox 6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d in task-service has been cleanup successfully" May 15 15:45:09.027077 containerd[1530]: time="2025-05-15T15:45:09.026849839Z" level=info msg="RemovePodSandbox \"6af666f45cca39ae5d79fa6e497f52ed16e603e3d56d21d221d328df8999a43d\" returns successfully" May 15 15:45:09.028733 kubelet[2778]: I0515 15:45:09.028260 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:09.048381 kubelet[2778]: I0515 15:45:09.048259 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:09.048934 kubelet[2778]: I0515 15:45:09.048894 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","calico-system/csi-node-driver-h6786","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:09.049144 kubelet[2778]: E0515 15:45:09.049119 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:09.049307 kubelet[2778]: E0515 15:45:09.049289 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049373 2778 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049391 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049404 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049426 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049444 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049463 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049483 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:09.049586 kubelet[2778]: E0515 15:45:09.049501 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:09.049586 kubelet[2778]: I0515 15:45:09.049519 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:45:09.071203 kubelet[2778]: I0515 15:45:09.070377 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-bbxbt" nodeCondition=["DiskPressure"] May 15 15:45:09.223887 kubelet[2778]: I0515 15:45:09.223806 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-gt5xm" nodeCondition=["DiskPressure"] May 15 15:45:09.372480 kubelet[2778]: I0515 15:45:09.372209 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-98w6c" nodeCondition=["DiskPressure"] May 15 15:45:09.525081 kubelet[2778]: I0515 15:45:09.525008 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ckllp" nodeCondition=["DiskPressure"] May 15 15:45:09.681323 kubelet[2778]: I0515 15:45:09.681182 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-lwwns" nodeCondition=["DiskPressure"] May 15 15:45:09.822686 kubelet[2778]: I0515 15:45:09.822592 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-pzct8" nodeCondition=["DiskPressure"] May 15 15:45:09.979141 kubelet[2778]: I0515 15:45:09.978844 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-d2gfj" nodeCondition=["DiskPressure"] May 15 15:45:10.119794 kubelet[2778]: I0515 15:45:10.119667 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qx7zp" nodeCondition=["DiskPressure"] May 15 15:45:10.270866 kubelet[2778]: I0515 15:45:10.269516 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wqlvl" nodeCondition=["DiskPressure"] May 15 15:45:10.423921 kubelet[2778]: I0515 15:45:10.423762 2778 eviction_manager.go:173] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-797db67f8-hqxcb" nodeCondition=["DiskPressure"] May 15 15:45:10.674745 kubelet[2778]: I0515 15:45:10.673581 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-z7bsg" nodeCondition=["DiskPressure"] May 15 15:45:10.831086 kubelet[2778]: I0515 15:45:10.830873 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-flnmv" nodeCondition=["DiskPressure"] May 15 15:45:10.907486 kubelet[2778]: E0515 15:45:10.906942 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:10.909515 containerd[1530]: time="2025-05-15T15:45:10.909453809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:45:10.942360 kubelet[2778]: I0515 15:45:10.941775 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zrfgc" nodeCondition=["DiskPressure"] May 15 15:45:11.031907 kubelet[2778]: I0515 15:45:11.030963 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-cvtf8" nodeCondition=["DiskPressure"] May 15 15:45:11.093136 containerd[1530]: time="2025-05-15T15:45:11.093035134Z" level=error msg="Failed to destroy network for sandbox \"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:11.099679 systemd[1]: run-netns-cni\x2ddb47ea4f\x2d8ea7\x2dd00a\x2d8cf9\x2d12eccbd83956.mount: Deactivated successfully. 
May 15 15:45:11.104527 containerd[1530]: time="2025-05-15T15:45:11.102532851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:11.106372 kubelet[2778]: E0515 15:45:11.106316 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:11.106833 kubelet[2778]: E0515 15:45:11.106743 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:11.107011 kubelet[2778]: E0515 15:45:11.106981 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:11.107867 kubelet[2778]: E0515 15:45:11.107650 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e52187a5af1063d04e002eac8087a8e774716babb4bf9b40af4eb97f90de409d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:45:11.122547 kubelet[2778]: I0515 15:45:11.122476 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zq9ht" nodeCondition=["DiskPressure"] May 15 15:45:11.328410 kubelet[2778]: I0515 15:45:11.327765 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-ndkqh" nodeCondition=["DiskPressure"] May 15 15:45:11.423036 kubelet[2778]: I0515 15:45:11.422920 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9q525" nodeCondition=["DiskPressure"] May 15 15:45:11.522643 kubelet[2778]: I0515 15:45:11.522204 2778 eviction_manager.go:173] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-797db67f8-chq52" nodeCondition=["DiskPressure"] May 15 15:45:11.625793 kubelet[2778]: I0515 15:45:11.625560 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-247pq" nodeCondition=["DiskPressure"] May 15 15:45:11.675144 kubelet[2778]: I0515 15:45:11.674817 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jrmd9" nodeCondition=["DiskPressure"] May 15 15:45:11.772128 kubelet[2778]: I0515 15:45:11.772062 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-dml6p" nodeCondition=["DiskPressure"] May 15 15:45:11.877034 kubelet[2778]: I0515 15:45:11.875899 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nlxhp" nodeCondition=["DiskPressure"] May 15 15:45:11.908177 containerd[1530]: time="2025-05-15T15:45:11.908110104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:45:11.999520 kubelet[2778]: I0515 15:45:11.998977 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-j67lx" nodeCondition=["DiskPressure"] May 15 15:45:12.060937 containerd[1530]: time="2025-05-15T15:45:12.060853352Z" level=error msg="Failed to destroy network for sandbox \"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:12.064887 containerd[1530]: time="2025-05-15T15:45:12.064543437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:12.066419 kubelet[2778]: E0515 15:45:12.066316 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:12.066739 kubelet[2778]: E0515 15:45:12.066430 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:12.066739 kubelet[2778]: E0515 15:45:12.066510 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:12.067974 kubelet[2778]: E0515 15:45:12.067006 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07f53fb83b460c3822296c7cb3a9754cd620dbdf5308b6588b8771ffcd271c3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:45:12.068263 systemd[1]: run-netns-cni\x2d0a590a76\x2dea1f\x2dc895\x2dd612\x2d3aacb11649bd.mount: Deactivated successfully. May 15 15:45:12.173972 kubelet[2778]: I0515 15:45:12.173886 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jjkkg" nodeCondition=["DiskPressure"] May 15 15:45:12.271923 kubelet[2778]: I0515 15:45:12.271835 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4c9kc" nodeCondition=["DiskPressure"] May 15 15:45:12.373728 kubelet[2778]: I0515 15:45:12.373586 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8lm2f" nodeCondition=["DiskPressure"] May 15 15:45:12.472988 kubelet[2778]: I0515 15:45:12.472514 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-78wff" nodeCondition=["DiskPressure"] May 15 15:45:12.573296 kubelet[2778]: I0515 15:45:12.573218 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-pkcxg" nodeCondition=["DiskPressure"] May 15 15:45:12.673218 kubelet[2778]: I0515 15:45:12.673150 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wjwvk" nodeCondition=["DiskPressure"] May 15 15:45:12.725643 kubelet[2778]: I0515 15:45:12.725310 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-f6986" nodeCondition=["DiskPressure"] May 15 15:45:12.819963 kubelet[2778]: I0515 15:45:12.819892 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-b7nw9" nodeCondition=["DiskPressure"] May 15 15:45:12.907828 kubelet[2778]: E0515 15:45:12.907758 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:12.913398 containerd[1530]: time="2025-05-15T15:45:12.913138105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:45:12.940102 kubelet[2778]: I0515 15:45:12.939624 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-56pjr" nodeCondition=["DiskPressure"] May 15 15:45:13.032549 kubelet[2778]: I0515 15:45:13.032029 2778 eviction_manager.go:173] "Failed 
to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-87xg6" nodeCondition=["DiskPressure"] May 15 15:45:13.088021 containerd[1530]: time="2025-05-15T15:45:13.087905956Z" level=error msg="Failed to destroy network for sandbox \"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:13.092557 containerd[1530]: time="2025-05-15T15:45:13.092403090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:13.094966 kubelet[2778]: E0515 15:45:13.092942 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:13.094966 kubelet[2778]: E0515 15:45:13.093055 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:13.094966 kubelet[2778]: E0515 15:45:13.093093 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:13.094966 kubelet[2778]: E0515 15:45:13.093170 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10858fe33e2ffcaa504ef6349aebda0ab297ae719b2948ba59863027155dddc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:45:13.093923 systemd[1]: run-netns-cni\x2da408c7fe\x2d7ea4\x2df14a\x2d09cb\x2d6270a256fa02.mount: Deactivated successfully. 
May 15 15:45:13.125107 kubelet[2778]: I0515 15:45:13.124582 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-kh4kd" nodeCondition=["DiskPressure"] May 15 15:45:13.222646 kubelet[2778]: I0515 15:45:13.222530 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qzl56" nodeCondition=["DiskPressure"] May 15 15:45:13.322867 kubelet[2778]: I0515 15:45:13.322215 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tf7rr" nodeCondition=["DiskPressure"] May 15 15:45:13.378620 kubelet[2778]: I0515 15:45:13.378529 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vhrm5" nodeCondition=["DiskPressure"] May 15 15:45:13.468739 kubelet[2778]: I0515 15:45:13.468180 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-cf5dj" nodeCondition=["DiskPressure"] May 15 15:45:13.573611 kubelet[2778]: I0515 15:45:13.573429 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8vnnt" nodeCondition=["DiskPressure"] May 15 15:45:13.682753 kubelet[2778]: I0515 15:45:13.682651 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qz96q" nodeCondition=["DiskPressure"] May 15 15:45:13.771509 kubelet[2778]: I0515 15:45:13.771376 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-2dcfk" nodeCondition=["DiskPressure"] May 15 15:45:13.874072 kubelet[2778]: I0515 15:45:13.872770 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-frxn9" nodeCondition=["DiskPressure"] May 15 15:45:13.908286 containerd[1530]: time="2025-05-15T15:45:13.908093949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:45:13.981319 kubelet[2778]: I0515 15:45:13.981036 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tkpjv" nodeCondition=["DiskPressure"] May 15 15:45:14.056744 containerd[1530]: time="2025-05-15T15:45:14.056618887Z" level=error msg="Failed to destroy network for sandbox \"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:14.061128 containerd[1530]: time="2025-05-15T15:45:14.060974232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:14.062163 kubelet[2778]: E0515 15:45:14.062028 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:14.062163 kubelet[2778]: E0515 15:45:14.062134 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:14.062163 kubelet[2778]: E0515 15:45:14.062166 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:14.063398 systemd[1]: run-netns-cni\x2dca9fecd4\x2d0fe6\x2d2554\x2de31c\x2d00fac7573b44.mount: Deactivated successfully. May 15 15:45:14.064303 kubelet[2778]: E0515 15:45:14.063782 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"323f621602261ec56cfd2f41f8310df7d19c3ecd3efbe4ffb7cac774aa2bc902\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:45:14.080765 kubelet[2778]: I0515 15:45:14.080676 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-fn7ns" nodeCondition=["DiskPressure"] May 15 15:45:14.173258 kubelet[2778]: I0515 15:45:14.172106 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-fk4t9" nodeCondition=["DiskPressure"] May 15 15:45:14.225034 kubelet[2778]: I0515 15:45:14.224957 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-c8cjk" nodeCondition=["DiskPressure"] May 15 15:45:14.330618 kubelet[2778]: I0515 15:45:14.330483 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5mzrw" nodeCondition=["DiskPressure"] May 15 15:45:14.425557 kubelet[2778]: I0515 15:45:14.424317 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nrkgg" nodeCondition=["DiskPressure"] May 15 15:45:14.524451 kubelet[2778]: I0515 15:45:14.524296 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mfr7c" nodeCondition=["DiskPressure"] May 15 15:45:14.623236 kubelet[2778]: I0515 15:45:14.623154 2778 
eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-kgzlf" nodeCondition=["DiskPressure"] May 15 15:45:14.720496 kubelet[2778]: I0515 15:45:14.720145 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jvgjx" nodeCondition=["DiskPressure"] May 15 15:45:14.820850 kubelet[2778]: I0515 15:45:14.820779 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mv57v" nodeCondition=["DiskPressure"] May 15 15:45:14.930890 kubelet[2778]: I0515 15:45:14.930778 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jg8x9" nodeCondition=["DiskPressure"] May 15 15:45:15.132883 kubelet[2778]: I0515 15:45:15.129866 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-964j9" nodeCondition=["DiskPressure"] May 15 15:45:15.232194 kubelet[2778]: I0515 15:45:15.232068 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5h6hv" nodeCondition=["DiskPressure"] May 15 15:45:15.324027 kubelet[2778]: I0515 15:45:15.323894 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rgcpv" nodeCondition=["DiskPressure"] May 15 15:45:15.421668 kubelet[2778]: I0515 15:45:15.421576 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-62jh6" nodeCondition=["DiskPressure"] May 15 15:45:15.526334 kubelet[2778]: I0515 15:45:15.526250 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zs9v7" nodeCondition=["DiskPressure"] May 15 15:45:15.657939 kubelet[2778]: I0515 15:45:15.657857 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-sdtbf" nodeCondition=["DiskPressure"] May 15 15:45:15.773185 kubelet[2778]: I0515 15:45:15.772927 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-n2j8f" nodeCondition=["DiskPressure"] May 15 15:45:15.875186 kubelet[2778]: I0515 15:45:15.875100 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-g9fk9" nodeCondition=["DiskPressure"] May 15 15:45:15.979489 kubelet[2778]: I0515 15:45:15.979368 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-w75ns" nodeCondition=["DiskPressure"] May 15 15:45:16.072839 kubelet[2778]: I0515 15:45:16.072592 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-md7k5" nodeCondition=["DiskPressure"] May 15 15:45:16.280876 kubelet[2778]: I0515 15:45:16.280806 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-xthnl" nodeCondition=["DiskPressure"] May 15 15:45:16.372878 kubelet[2778]: I0515 15:45:16.371632 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6qcmj" nodeCondition=["DiskPressure"] May 15 15:45:16.485366 kubelet[2778]: I0515 15:45:16.485248 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-qschj" nodeCondition=["DiskPressure"] May 15 15:45:16.578129 kubelet[2778]: I0515 15:45:16.576605 2778 eviction_manager.go:173] "Failed to admit pod to node" 
pod="tigera-operator/tigera-operator-797db67f8-2pm77" nodeCondition=["DiskPressure"] May 15 15:45:16.720741 kubelet[2778]: I0515 15:45:16.717270 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-c6qf7" nodeCondition=["DiskPressure"] May 15 15:45:16.772787 kubelet[2778]: I0515 15:45:16.772676 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-clhjv" nodeCondition=["DiskPressure"] May 15 15:45:16.876419 kubelet[2778]: I0515 15:45:16.876027 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-tngbj" nodeCondition=["DiskPressure"] May 15 15:45:17.096235 kubelet[2778]: I0515 15:45:17.095973 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-2m2cc" nodeCondition=["DiskPressure"] May 15 15:45:17.173834 kubelet[2778]: I0515 15:45:17.173669 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rkllx" nodeCondition=["DiskPressure"] May 15 15:45:17.290761 kubelet[2778]: I0515 15:45:17.289574 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-vcr95" nodeCondition=["DiskPressure"] May 15 15:45:17.375510 kubelet[2778]: I0515 15:45:17.374809 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-rllj6" nodeCondition=["DiskPressure"] May 15 15:45:17.475925 kubelet[2778]: I0515 15:45:17.475618 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-4r8tv" nodeCondition=["DiskPressure"] May 15 15:45:17.575644 kubelet[2778]: I0515 15:45:17.575299 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-7v26n" nodeCondition=["DiskPressure"] May 15 15:45:17.673496 kubelet[2778]: I0515 15:45:17.673413 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-8bzk6" nodeCondition=["DiskPressure"] May 15 15:45:17.775930 kubelet[2778]: I0515 15:45:17.775859 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-6z5c6" nodeCondition=["DiskPressure"] May 15 15:45:17.873602 kubelet[2778]: I0515 15:45:17.873475 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-m7tdl" nodeCondition=["DiskPressure"] May 15 15:45:17.976637 kubelet[2778]: I0515 15:45:17.976365 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-zj6t6" nodeCondition=["DiskPressure"] May 15 15:45:18.074988 kubelet[2778]: I0515 15:45:18.074839 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-75wr4" nodeCondition=["DiskPressure"] May 15 15:45:18.172948 kubelet[2778]: I0515 15:45:18.172860 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-5cbp9" nodeCondition=["DiskPressure"] May 15 15:45:18.289007 kubelet[2778]: I0515 15:45:18.287614 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wk5x6" nodeCondition=["DiskPressure"] May 15 15:45:18.370192 kubelet[2778]: I0515 15:45:18.370139 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-s7g26" 
nodeCondition=["DiskPressure"] May 15 15:45:18.471447 kubelet[2778]: I0515 15:45:18.471382 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mcr6p" nodeCondition=["DiskPressure"] May 15 15:45:18.571906 kubelet[2778]: I0515 15:45:18.570448 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mbw69" nodeCondition=["DiskPressure"] May 15 15:45:18.677061 kubelet[2778]: I0515 15:45:18.676989 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-hrl5n" nodeCondition=["DiskPressure"] May 15 15:45:18.773433 kubelet[2778]: I0515 15:45:18.773343 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wg5lg" nodeCondition=["DiskPressure"] May 15 15:45:18.871825 kubelet[2778]: I0515 15:45:18.870468 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-z9k78" nodeCondition=["DiskPressure"] May 15 15:45:18.978926 kubelet[2778]: I0515 15:45:18.978820 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-f5tmp" nodeCondition=["DiskPressure"] May 15 15:45:19.026762 kubelet[2778]: I0515 15:45:19.025826 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-9sjt9" nodeCondition=["DiskPressure"] May 15 15:45:19.080473 kubelet[2778]: I0515 15:45:19.079959 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:19.080473 kubelet[2778]: I0515 15:45:19.080016 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:19.082969 kubelet[2778]: I0515 15:45:19.082919 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:19.099723 kubelet[2778]: I0515 15:45:19.099651 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:19.099882 kubelet[2778]: I0515 15:45:19.099790 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:19.099882 kubelet[2778]: E0515 15:45:19.099833 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:19.099882 kubelet[2778]: E0515 15:45:19.099845 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:19.099882 kubelet[2778]: E0515 15:45:19.099853 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:19.099882 kubelet[2778]: E0515 15:45:19.099861 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099886 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-node-nfvst" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099900 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099910 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099921 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099930 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:19.100293 kubelet[2778]: E0515 15:45:19.099939 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:19.100293 kubelet[2778]: I0515 15:45:19.099950 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:45:19.125913 kubelet[2778]: I0515 15:45:19.125733 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-mmvfw" nodeCondition=["DiskPressure"] May 15 15:45:19.221958 kubelet[2778]: I0515 15:45:19.221877 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-nctxb" nodeCondition=["DiskPressure"] May 15 15:45:19.327732 kubelet[2778]: I0515 15:45:19.327596 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-jzljn" nodeCondition=["DiskPressure"] May 15 15:45:19.423767 kubelet[2778]: I0515 15:45:19.423662 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-wj96w" nodeCondition=["DiskPressure"] May 15 15:45:19.479135 kubelet[2778]: I0515 15:45:19.478802 2778 eviction_manager.go:173] "Failed to admit pod to node" pod="tigera-operator/tigera-operator-797db67f8-m7wfz" nodeCondition=["DiskPressure"] May 15 15:45:19.695417 systemd[1]: Started sshd@7-164.92.106.96:22-139.178.68.195:33062.service - OpenSSH per-connection server daemon (139.178.68.195:33062). May 15 15:45:19.801443 sshd[3865]: Accepted publickey for core from 139.178.68.195 port 33062 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:19.804562 sshd-session[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:19.813132 systemd-logind[1513]: New session 8 of user core. May 15 15:45:19.824117 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 15:45:20.034682 sshd[3867]: Connection closed by 139.178.68.195 port 33062 May 15 15:45:20.036007 sshd-session[3865]: pam_unix(sshd:session): session closed for user core May 15 15:45:20.042763 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit. May 15 15:45:20.042898 systemd[1]: sshd@7-164.92.106.96:22-139.178.68.195:33062.service: Deactivated successfully. May 15 15:45:20.047254 systemd[1]: session-8.scope: Deactivated successfully. May 15 15:45:20.051928 systemd-logind[1513]: Removed session 8. 
May 15 15:45:21.909007 kubelet[2778]: E0515 15:45:21.908379 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:21.912575 containerd[1530]: time="2025-05-15T15:45:21.911846060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:45:22.907931 kubelet[2778]: E0515 15:45:22.907463 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:22.909731 containerd[1530]: time="2025-05-15T15:45:22.909420847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:45:22.999511 containerd[1530]: time="2025-05-15T15:45:22.999450155Z" level=error msg="Failed to destroy network for sandbox \"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:23.002086 containerd[1530]: time="2025-05-15T15:45:23.001847886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:23.006169 kubelet[2778]: E0515 15:45:23.003200 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:23.006169 kubelet[2778]: E0515 15:45:23.003274 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:23.006169 kubelet[2778]: E0515 15:45:23.003556 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:23.006169 kubelet[2778]: E0515 15:45:23.003635 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e94a13c186c0da71da472d3c84e3b8908ef71bd899d93abe62d05aeaccf0c61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:45:23.005441 systemd[1]: run-netns-cni\x2df7511fee\x2d13c5\x2dc38b\x2da01a\x2d8f9a979074f7.mount: Deactivated successfully. May 15 15:45:25.055121 systemd[1]: Started sshd@8-164.92.106.96:22-139.178.68.195:60298.service - OpenSSH per-connection server daemon (139.178.68.195:60298). May 15 15:45:25.172392 sshd[3917]: Accepted publickey for core from 139.178.68.195 port 60298 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:25.175001 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:25.189373 systemd-logind[1513]: New session 9 of user core. May 15 15:45:25.196093 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 15:45:25.410909 sshd[3919]: Connection closed by 139.178.68.195 port 60298 May 15 15:45:25.410406 sshd-session[3917]: pam_unix(sshd:session): session closed for user core May 15 15:45:25.419736 systemd[1]: sshd@8-164.92.106.96:22-139.178.68.195:60298.service: Deactivated successfully. May 15 15:45:25.425632 systemd[1]: session-9.scope: Deactivated successfully. May 15 15:45:25.427737 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit. May 15 15:45:25.430145 systemd-logind[1513]: Removed session 9. May 15 15:45:25.908835 containerd[1530]: time="2025-05-15T15:45:25.908196445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:45:25.921988 containerd[1530]: time="2025-05-15T15:45:25.921620565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:45:26.140744 containerd[1530]: time="2025-05-15T15:45:26.138968551Z" level=error msg="Failed to destroy network for sandbox \"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.145150 systemd[1]: run-netns-cni\x2d87f369a8\x2d0767\x2dc343\x2d843c\x2d53c4402bb53b.mount: Deactivated successfully. 
May 15 15:45:26.154848 containerd[1530]: time="2025-05-15T15:45:26.153989837Z" level=error msg="Failed to destroy network for sandbox \"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.159650 kubelet[2778]: E0515 15:45:26.158798 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.159650 kubelet[2778]: E0515 15:45:26.158871 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:26.159650 kubelet[2778]: E0515 15:45:26.158895 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:26.159344 systemd[1]: run-netns-cni\x2d2de4f07b\x2d3bb0\x2d211b\x2d49da\x2daf6dd49cd034.mount: Deactivated successfully. 
May 15 15:45:26.161905 containerd[1530]: time="2025-05-15T15:45:26.157298340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.162026 kubelet[2778]: E0515 15:45:26.161276 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef1c56ef7c9677d72d2098790ae70f51d38cd2bd33f4f31a36f04ff3c9ba6793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:45:26.162413 containerd[1530]: time="2025-05-15T15:45:26.162267822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.165347 kubelet[2778]: E0515 15:45:26.163805 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:26.165684 kubelet[2778]: E0515 15:45:26.165409 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:26.165684 kubelet[2778]: E0515 15:45:26.165450 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:26.165684 kubelet[2778]: E0515 15:45:26.165530 2778 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"445f5b8d2156901eff162974461cf9c0da0acfec0bdac0c8ec4f3c8f09c1652d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:45:26.681647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032197963.mount: Deactivated successfully. May 15 15:45:26.685740 containerd[1530]: time="2025-05-15T15:45:26.685454420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1032197963: mkdir /var/lib/containerd/tmpmounts/containerd-mount1032197963/usr/lib/.build-id/5a: no space left on device" May 15 15:45:26.685740 containerd[1530]: time="2025-05-15T15:45:26.685504484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:45:26.686162 kubelet[2778]: E0515 15:45:26.686104 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1032197963: mkdir /var/lib/containerd/tmpmounts/containerd-mount1032197963/usr/lib/.build-id/5a: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:26.686292 kubelet[2778]: E0515 15:45:26.686168 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1032197963: mkdir /var/lib/containerd/tmpmounts/containerd-mount1032197963/usr/lib/.build-id/5a: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:26.687635 kubelet[2778]: E0515 15:45:26.686416 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-nrpmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nfvst_calico-system(85ff5786-c114-43e4-8f58-d6ff4433361a): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1032197963: mkdir /var/lib/containerd/tmpmounts/containerd-mount1032197963/usr/lib/.build-id/5a: no space left on device May 15 15:45:26.687925 kubelet[2778]: E0515 15:45:26.686479 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1032197963: mkdir /var/lib/containerd/tmpmounts/containerd-mount1032197963/usr/lib/.build-id/5a: no space left on device\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:45:28.907475 kubelet[2778]: E0515 15:45:28.907168 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:28.909443 containerd[1530]: time="2025-05-15T15:45:28.909335232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:45:28.988692 containerd[1530]: time="2025-05-15T15:45:28.988606062Z" level=error msg="Failed to destroy network for sandbox \"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:28.992103 
containerd[1530]: time="2025-05-15T15:45:28.992013570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:28.992431 kubelet[2778]: E0515 15:45:28.992349 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:28.992500 kubelet[2778]: E0515 15:45:28.992442 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:28.992500 kubelet[2778]: E0515 15:45:28.992468 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:28.992608 kubelet[2778]: E0515 15:45:28.992540 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e081da5c10a542f2644c196c0f4f102e6bd13ff04dde30f0afc728dbf929065e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:45:28.993549 systemd[1]: run-netns-cni\x2da6c9197a\x2de04f\x2d51f0\x2d7f09\x2d3ec96f5f3bb5.mount: Deactivated successfully. 
May 15 15:45:29.119038 kubelet[2778]: I0515 15:45:29.118979 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:29.119038 kubelet[2778]: I0515 15:45:29.119028 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:29.122631 kubelet[2778]: I0515 15:45:29.122461 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:29.138407 kubelet[2778]: I0515 15:45:29.138178 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:29.138407 kubelet[2778]: I0515 15:45:29.138303 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138866 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138905 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138919 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138930 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138941 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138961 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138978 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.138998 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.139015 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:29.139067 kubelet[2778]: E0515 15:45:29.139029 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:29.139067 kubelet[2778]: I0515 15:45:29.139048 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:45:30.431264 systemd[1]: Started sshd@9-164.92.106.96:22-139.178.68.195:60314.service - OpenSSH per-connection server daemon (139.178.68.195:60314). 
May 15 15:45:30.531021 sshd[4025]: Accepted publickey for core from 139.178.68.195 port 60314 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:30.533476 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:30.541019 systemd-logind[1513]: New session 10 of user core. May 15 15:45:30.550079 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 15:45:30.718002 sshd[4027]: Connection closed by 139.178.68.195 port 60314 May 15 15:45:30.718899 sshd-session[4025]: pam_unix(sshd:session): session closed for user core May 15 15:45:30.724004 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit. May 15 15:45:30.725562 systemd[1]: sshd@9-164.92.106.96:22-139.178.68.195:60314.service: Deactivated successfully. May 15 15:45:30.731401 systemd[1]: session-10.scope: Deactivated successfully. May 15 15:45:30.737856 systemd-logind[1513]: Removed session 10. May 15 15:45:35.739841 systemd[1]: Started sshd@10-164.92.106.96:22-139.178.68.195:49058.service - OpenSSH per-connection server daemon (139.178.68.195:49058). May 15 15:45:35.815434 sshd[4040]: Accepted publickey for core from 139.178.68.195 port 49058 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:35.817102 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:35.825502 systemd-logind[1513]: New session 11 of user core. May 15 15:45:35.835081 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 15:45:35.907919 kubelet[2778]: E0515 15:45:35.907844 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:35.910653 containerd[1530]: time="2025-05-15T15:45:35.909199415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:45:36.059768 containerd[1530]: time="2025-05-15T15:45:36.058348722Z" level=error msg="Failed to destroy network for sandbox \"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:36.061178 containerd[1530]: time="2025-05-15T15:45:36.061013851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:36.061563 kubelet[2778]: E0515 15:45:36.061341 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:36.061563 kubelet[2778]: E0515 15:45:36.061448 2778 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:36.061563 kubelet[2778]: E0515 15:45:36.061478 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:36.061563 kubelet[2778]: E0515 15:45:36.061526 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"326bbefa88c6bae2b84616469069ba5986afa338338c7e4107abcfd154576dfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:45:36.065044 systemd[1]: run-netns-cni\x2dc5984ed7\x2d764e\x2dac68\x2dfa73\x2d5c0bd508870a.mount: Deactivated successfully. May 15 15:45:36.090556 sshd[4042]: Connection closed by 139.178.68.195 port 49058 May 15 15:45:36.091114 sshd-session[4040]: pam_unix(sshd:session): session closed for user core May 15 15:45:36.105536 systemd[1]: sshd@10-164.92.106.96:22-139.178.68.195:49058.service: Deactivated successfully. May 15 15:45:36.108945 systemd[1]: session-11.scope: Deactivated successfully. May 15 15:45:36.111035 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit. May 15 15:45:36.116859 systemd[1]: Started sshd@11-164.92.106.96:22-139.178.68.195:49074.service - OpenSSH per-connection server daemon (139.178.68.195:49074). May 15 15:45:36.118693 systemd-logind[1513]: Removed session 11. May 15 15:45:36.192840 sshd[4084]: Accepted publickey for core from 139.178.68.195 port 49074 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:36.195192 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:36.203830 systemd-logind[1513]: New session 12 of user core. May 15 15:45:36.213138 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 15:45:36.430935 sshd[4086]: Connection closed by 139.178.68.195 port 49074 May 15 15:45:36.431453 sshd-session[4084]: pam_unix(sshd:session): session closed for user core May 15 15:45:36.451335 systemd[1]: sshd@11-164.92.106.96:22-139.178.68.195:49074.service: Deactivated successfully. May 15 15:45:36.457986 systemd[1]: session-12.scope: Deactivated successfully. May 15 15:45:36.460398 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit. 
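The recurring "Nameserver limits exceeded" warning (dns.go:153) fires because the node's resolv.conf carries more nameserver lines than the classic resolver limit of three, including a duplicate 67.207.67.2; kubelet applies the first three and drops the rest. A sketch of that truncation, assuming the standard MAXNS=3 cap; the sample data mirrors the applied line quoted in the log:

    # resolv_check.py -- sketch of the 3-nameserver cap behind the kubelet
    # "Nameserver limits exceeded" warning. MAXNS=3 is the classic resolver
    # limit (assumption); note kubelet does not deduplicate, which is why
    # 67.207.67.2 appears twice in the applied line.
    MAXNS = 3

    def applied_nameservers(resolv_conf_text: str) -> list:
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.strip().startswith("nameserver")
                   and len(line.split()) > 1]
        return servers[:MAXNS]  # everything past the cap is dropped

    sample = ("nameserver 67.207.67.2\nnameserver 67.207.67.3\n"
              "nameserver 67.207.67.2\nnameserver 1.1.1.1\n")
    print(applied_nameservers(sample))
    # -> ['67.207.67.2', '67.207.67.3', '67.207.67.2']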
May 15 15:45:36.471304 systemd[1]: Started sshd@12-164.92.106.96:22-139.178.68.195:49080.service - OpenSSH per-connection server daemon (139.178.68.195:49080). May 15 15:45:36.473837 systemd-logind[1513]: Removed session 12. May 15 15:45:36.550781 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 49080 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:36.553277 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:36.559958 systemd-logind[1513]: New session 13 of user core. May 15 15:45:36.568081 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 15:45:36.725230 sshd[4098]: Connection closed by 139.178.68.195 port 49080 May 15 15:45:36.727089 sshd-session[4096]: pam_unix(sshd:session): session closed for user core May 15 15:45:36.734511 systemd[1]: sshd@12-164.92.106.96:22-139.178.68.195:49080.service: Deactivated successfully. May 15 15:45:36.739536 systemd[1]: session-13.scope: Deactivated successfully. May 15 15:45:36.741855 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit. May 15 15:45:36.745856 systemd-logind[1513]: Removed session 13. May 15 15:45:36.908288 containerd[1530]: time="2025-05-15T15:45:36.908143619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:45:36.994568 containerd[1530]: time="2025-05-15T15:45:36.994214453Z" level=error msg="Failed to destroy network for sandbox \"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:36.998435 systemd[1]: run-netns-cni\x2d8e8152e4\x2d8ae3\x2d125e\x2d8fef\x2d05d9ff9a2d4e.mount: Deactivated successfully. 
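The run-netns-cni\x2d... mount units that keep deactivating are systemd's escaped form of the CNI network-namespace paths under /run/netns: "/" becomes "-" in a unit name, and a literal "-" becomes \x2d. A small sketch of the unescaping (systemd-escape --unescape does this for real); only the \xXX rule is implemented here:

    # unit_unescape.py -- sketch of systemd unit-name unescaping, enough to
    # read the run-netns-cni\x2d... mount units in this log.
    import re

    def unescape_unit(name: str) -> str:
        # "-" separates path components; decode \xXX escapes afterwards so
        # restored dashes are not mistaken for separators.
        name = name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(
        r"run-netns-cni\x2da6c9197a\x2de04f\x2d51f0\x2d7f09\x2d3ec96f5f3bb5"))
    # -> run/netns/cni-a6c9197a-e04f-51f0-7f09-3ec96f5f3bb5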
May 15 15:45:37.001020 containerd[1530]: time="2025-05-15T15:45:36.998144471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:37.001201 kubelet[2778]: E0515 15:45:36.999101 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:37.001201 kubelet[2778]: E0515 15:45:36.999225 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:37.001201 kubelet[2778]: E0515 15:45:36.999255 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:37.001201 kubelet[2778]: E0515 15:45:36.999310 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d84c3e335009dd05f00aaa1a8c9a97551423c7747e2416500381c9f7ccb8cbef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:45:37.907307 kubelet[2778]: E0515 15:45:37.906968 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:37.909121 kubelet[2778]: E0515 15:45:37.908982 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:45:39.159288 kubelet[2778]: 
I0515 15:45:39.159220 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:39.159288 kubelet[2778]: I0515 15:45:39.159283 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:39.164287 kubelet[2778]: I0515 15:45:39.164244 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:39.181451 kubelet[2778]: I0515 15:45:39.181408 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:39.181904 kubelet[2778]: I0515 15:45:39.181617 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181774 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181790 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181802 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181812 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181824 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181842 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181858 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181877 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181896 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:39.181904 kubelet[2778]: E0515 15:45:39.181906 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:39.182540 kubelet[2778]: I0515 15:45:39.181922 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:45:40.909167 kubelet[2778]: E0515 15:45:40.908149 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:40.910801 containerd[1530]: time="2025-05-15T15:45:40.908766278Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:45:40.915317 containerd[1530]: time="2025-05-15T15:45:40.914847360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:45:41.024290 containerd[1530]: time="2025-05-15T15:45:41.024216759Z" level=error msg="Failed to destroy network for sandbox \"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.028777 systemd[1]: run-netns-cni\x2d8bc82b4c\x2de364\x2daa66\x2d292b\x2dc65253a884ce.mount: Deactivated successfully. May 15 15:45:41.032567 containerd[1530]: time="2025-05-15T15:45:41.032496769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.033083 kubelet[2778]: E0515 15:45:41.033030 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.033243 kubelet[2778]: E0515 15:45:41.033144 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:41.033346 kubelet[2778]: E0515 15:45:41.033256 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:41.034002 kubelet[2778]: E0515 15:45:41.033598 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5db36110c901bb1982d215f81968a7542ed9854c49ddd0b6bcae12ffc0bfc4fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:45:41.039774 containerd[1530]: time="2025-05-15T15:45:41.039030177Z" level=error msg="Failed to destroy network for sandbox \"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.043657 systemd[1]: run-netns-cni\x2d9a5505d9\x2d5c90\x2d2eba\x2d23fe\x2dd25abadb23dd.mount: Deactivated successfully. May 15 15:45:41.046809 containerd[1530]: time="2025-05-15T15:45:41.045108249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.047538 kubelet[2778]: E0515 15:45:41.047401 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:41.048235 kubelet[2778]: E0515 15:45:41.047665 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:41.048235 kubelet[2778]: E0515 15:45:41.047738 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:41.048235 kubelet[2778]: E0515 15:45:41.047821 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47a13ab4ad6ede1a69f0a3ac0901c00eb415ed500f9e326ad9db6875102155f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:45:41.748119 systemd[1]: Started 
sshd@13-164.92.106.96:22-139.178.68.195:49082.service - OpenSSH per-connection server daemon (139.178.68.195:49082). May 15 15:45:41.824099 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 49082 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:41.826296 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:41.834498 systemd-logind[1513]: New session 14 of user core. May 15 15:45:41.844098 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 15:45:41.909082 kubelet[2778]: E0515 15:45:41.908533 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:41.987680 systemd[1]: Started sshd@14-164.92.106.96:22-97.86.134.216:1872.service - OpenSSH per-connection server daemon (97.86.134.216:1872). May 15 15:45:42.041443 sshd[4204]: Connection closed by 139.178.68.195 port 49082 May 15 15:45:42.042265 sshd-session[4202]: pam_unix(sshd:session): session closed for user core May 15 15:45:42.048324 systemd[1]: sshd@13-164.92.106.96:22-139.178.68.195:49082.service: Deactivated successfully. May 15 15:45:42.051719 systemd[1]: session-14.scope: Deactivated successfully. May 15 15:45:42.053543 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit. May 15 15:45:42.057342 systemd-logind[1513]: Removed session 14. May 15 15:45:43.694058 sshd[4212]: Invalid user test from 97.86.134.216 port 1872 May 15 15:45:44.000169 sshd-session[4218]: pam_faillock(sshd:auth): User unknown May 15 15:45:44.002982 sshd[4212]: Postponed keyboard-interactive for invalid user test from 97.86.134.216 port 1872 ssh2 [preauth] May 15 15:45:44.248990 sshd-session[4218]: pam_unix(sshd:auth): check pass; user unknown May 15 15:45:44.249046 sshd-session[4218]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=97.86.134.216 May 15 15:45:44.250607 sshd-session[4218]: pam_faillock(sshd:auth): User unknown May 15 15:45:45.389478 sshd[4212]: PAM: Permission denied for illegal user test from 97.86.134.216 May 15 15:45:45.391000 sshd[4212]: Failed keyboard-interactive/pam for invalid user test from 97.86.134.216 port 1872 ssh2 May 15 15:45:45.809759 sshd[4212]: Connection closed by invalid user test 97.86.134.216 port 1872 [preauth] May 15 15:45:45.811264 systemd[1]: sshd@14-164.92.106.96:22-97.86.134.216:1872.service: Deactivated successfully. May 15 15:45:47.062984 systemd[1]: Started sshd@15-164.92.106.96:22-139.178.68.195:44610.service - OpenSSH per-connection server daemon (139.178.68.195:44610). May 15 15:45:47.144317 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 44610 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:47.146977 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:47.157073 systemd-logind[1513]: New session 15 of user core. May 15 15:45:47.163114 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 15:45:47.381765 sshd[4225]: Connection closed by 139.178.68.195 port 44610 May 15 15:45:47.383229 sshd-session[4223]: pam_unix(sshd:session): session closed for user core May 15 15:45:47.391498 systemd[1]: sshd@15-164.92.106.96:22-139.178.68.195:44610.service: Deactivated successfully. May 15 15:45:47.397513 systemd[1]: session-15.scope: Deactivated successfully. 
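Besides the orchestrated logins from 139.178.68.195, the log captures a brute-force probe: an invalid user "test" from 97.86.134.216, rejected by pam_faillock before authentication completed. A sketch of pulling such preauth failures out of a journal dump like this one; the regex is an illustration, not an exhaustive sshd message grammar:

    # preauth_scan.py -- illustrative scan for sshd invalid-user attempts
    # in a journal dump; matches entries like
    #   sshd[4212]: Invalid user test from 97.86.134.216 port 1872
    import re
    import sys

    PATTERN = re.compile(
        r"sshd\[\d+\]: Invalid user (?P<user>\S+) "
        r"from (?P<ip>[\d.]+) port (?P<port>\d+)")

    def scan(lines):
        for line in lines:
            m = PATTERN.search(line)
            if m:
                yield m.group("user"), m.group("ip"), m.group("port")

    if __name__ == "__main__":
        for user, ip, port in scan(sys.stdin):
            print(f"invalid user {user!r} from {ip}:{port}")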
May 15 15:45:47.399300 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit. May 15 15:45:47.403125 systemd-logind[1513]: Removed session 15. May 15 15:45:47.908188 containerd[1530]: time="2025-05-15T15:45:47.907819976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:45:48.009209 containerd[1530]: time="2025-05-15T15:45:48.009146328Z" level=error msg="Failed to destroy network for sandbox \"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:48.012831 systemd[1]: run-netns-cni\x2daeb36a59\x2d91ed\x2d555b\x2d949b\x2d3182c13bd224.mount: Deactivated successfully. May 15 15:45:48.013228 containerd[1530]: time="2025-05-15T15:45:48.013151534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:48.014392 kubelet[2778]: E0515 15:45:48.013543 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:48.014392 kubelet[2778]: E0515 15:45:48.013612 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:48.014392 kubelet[2778]: E0515 15:45:48.013637 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:48.014392 kubelet[2778]: E0515 15:45:48.013693 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a1a85c50fb031c5a6eee4e32533552e14210ad7a3e170f2763f22f815e269652\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:45:48.909127 kubelet[2778]: E0515 15:45:48.909077 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:48.910954 containerd[1530]: time="2025-05-15T15:45:48.910890593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:45:49.006679 containerd[1530]: time="2025-05-15T15:45:49.006559366Z" level=error msg="Failed to destroy network for sandbox \"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:49.010356 systemd[1]: run-netns-cni\x2dab48c49d\x2db45e\x2df6c6\x2d08d8\x2d69ec37509eb6.mount: Deactivated successfully. May 15 15:45:49.023587 containerd[1530]: time="2025-05-15T15:45:49.023508882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:49.024347 kubelet[2778]: E0515 15:45:49.024291 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:49.025051 kubelet[2778]: E0515 15:45:49.024365 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:49.025051 kubelet[2778]: E0515 15:45:49.024393 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:49.025051 kubelet[2778]: E0515 15:45:49.024793 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c284a27644880bb371ff490ebef91589baa7d8af2bf6943b47cbfb9b16270b84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:45:49.199192 kubelet[2778]: I0515 15:45:49.199115 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:49.199192 kubelet[2778]: I0515 15:45:49.199180 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:49.202865 kubelet[2778]: I0515 15:45:49.202802 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:49.223722 kubelet[2778]: I0515 15:45:49.223514 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:49.223722 kubelet[2778]: I0515 15:45:49.223639 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:49.223722 kubelet[2778]: E0515 15:45:49.223682 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:49.224039 kubelet[2778]: E0515 15:45:49.223693 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:49.224039 kubelet[2778]: E0515 15:45:49.223981 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:49.224039 kubelet[2778]: E0515 15:45:49.223990 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:49.224039 kubelet[2778]: E0515 15:45:49.223998 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:49.224039 kubelet[2778]: E0515 15:45:49.224017 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:49.224604 kubelet[2778]: E0515 15:45:49.224418 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:49.224604 kubelet[2778]: E0515 15:45:49.224533 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:49.224604 kubelet[2778]: E0515 15:45:49.224545 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:49.224604 kubelet[2778]: E0515 
15:45:49.224555 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:49.224604 kubelet[2778]: I0515 15:45:49.224567 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:45:50.907620 kubelet[2778]: E0515 15:45:50.907532 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:50.911736 containerd[1530]: time="2025-05-15T15:45:50.911123760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:45:52.408611 systemd[1]: Started sshd@16-164.92.106.96:22-139.178.68.195:44620.service - OpenSSH per-connection server daemon (139.178.68.195:44620). May 15 15:45:52.510105 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 44620 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:52.516158 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:52.541845 systemd-logind[1513]: New session 16 of user core. May 15 15:45:52.544998 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 15:45:52.825206 sshd[4301]: Connection closed by 139.178.68.195 port 44620 May 15 15:45:52.826564 sshd-session[4299]: pam_unix(sshd:session): session closed for user core May 15 15:45:52.839058 systemd[1]: sshd@16-164.92.106.96:22-139.178.68.195:44620.service: Deactivated successfully. May 15 15:45:52.851343 systemd[1]: session-16.scope: Deactivated successfully. May 15 15:45:52.858085 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit. May 15 15:45:52.866865 systemd-logind[1513]: Removed session 16. May 15 15:45:54.914239 kubelet[2778]: E0515 15:45:54.914136 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:54.916799 containerd[1530]: time="2025-05-15T15:45:54.916482001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:45:55.126922 containerd[1530]: time="2025-05-15T15:45:55.126819395Z" level=error msg="Failed to destroy network for sandbox \"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:55.131434 systemd[1]: run-netns-cni\x2d12629a90\x2da376\x2dcedc\x2dc588\x2d393a144c6bb5.mount: Deactivated successfully. 
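The "PullImage" info entry marks kubelet retrying the calico/node image after the earlier ImagePullBackOff: the pull backoff roughly doubles per failure up to a ceiling. The 10 s initial delay and 300 s cap below are upstream kubelet defaults and an assumption here, since this log does not state them:

    # backoff.py -- sketch of the image-pull backoff that produces the
    # ImagePullBackOff entries; initial/cap values are upstream defaults
    # (assumption, not visible in this log).
    import itertools

    def backoff_delays(initial=10.0, cap=300.0):
        """Yield successive retry delays: double each failure, then cap."""
        d = initial
        while True:
            yield min(d, cap)
            d *= 2

    print(list(itertools.islice(backoff_delays(), 7)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]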
May 15 15:45:55.177747 containerd[1530]: time="2025-05-15T15:45:55.176362803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:55.181106 kubelet[2778]: E0515 15:45:55.181050 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:55.183458 kubelet[2778]: E0515 15:45:55.183186 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:55.183941 kubelet[2778]: E0515 15:45:55.183721 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:55.184584 kubelet[2778]: E0515 15:45:55.184261 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cbfccc1410ed6fe7de42b6d0fba47c805b5776d33c85ce030914cd03378832a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:45:55.920504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156631153.mount: Deactivated successfully. 
May 15 15:45:55.929767 containerd[1530]: time="2025-05-15T15:45:55.929613822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1156631153: write /var/lib/containerd/tmpmounts/containerd-mount1156631153/usr/bin/mountns: no space left on device" May 15 15:45:55.931497 containerd[1530]: time="2025-05-15T15:45:55.929735581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:45:55.931679 kubelet[2778]: E0515 15:45:55.931110 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1156631153: write /var/lib/containerd/tmpmounts/containerd-mount1156631153/usr/bin/mountns: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:55.931679 kubelet[2778]: E0515 15:45:55.931182 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1156631153: write /var/lib/containerd/tmpmounts/containerd-mount1156631153/usr/bin/mountns: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 15:45:55.933239 kubelet[2778]: E0515 15:45:55.931975 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:interface=eth0,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-ac
cess-nrpmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-nfvst_calico-system(85ff5786-c114-43e4-8f58-d6ff4433361a): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/node:v3.29.3": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1156631153: write /var/lib/containerd/tmpmounts/containerd-mount1156631153/usr/bin/mountns: no space left on device May 15 15:45:55.933466 kubelet[2778]: E0515 15:45:55.932043 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1156631153: write /var/lib/containerd/tmpmounts/containerd-mount1156631153/usr/bin/mountns: no space left on device\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:45:55.940392 containerd[1530]: time="2025-05-15T15:45:55.940308287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:45:56.052448 containerd[1530]: time="2025-05-15T15:45:56.052366114Z" level=error msg="Failed to destroy network for sandbox \"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:56.056172 containerd[1530]: time="2025-05-15T15:45:56.056094148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:56.059370 kubelet[2778]: E0515 15:45:56.056444 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:56.059370 kubelet[2778]: E0515 15:45:56.056522 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:56.059370 kubelet[2778]: E0515 15:45:56.056554 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:45:56.059370 kubelet[2778]: E0515 15:45:56.056609 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea94f323ca6cd27b03ecd6a71f505054e9d5775cbb6fcb99e9d5697ac2597f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:45:56.059952 systemd[1]: run-netns-cni\x2d892a1b79\x2d7d27\x2d9a7a\x2d8b93\x2d92ca7d582c16.mount: Deactivated successfully. May 15 15:45:57.846484 systemd[1]: Started sshd@17-164.92.106.96:22-139.178.68.195:43840.service - OpenSSH per-connection server daemon (139.178.68.195:43840). May 15 15:45:57.908493 kubelet[2778]: E0515 15:45:57.907154 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:45:57.970288 sshd[4382]: Accepted publickey for core from 139.178.68.195 port 43840 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:57.972968 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:57.982155 systemd-logind[1513]: New session 17 of user core. May 15 15:45:57.995459 systemd[1]: Started session-17.scope - Session 17 of User core. 
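The retried pull above now fails outright with "no space left on device" while extracting a layer, which closes the loop on this whole section: the filesystem is full, so containerd cannot unpack calico/node; without calico/node, /var/lib/calico/nodename is never written and every sandbox ADD fails; and the eviction manager cannot free space because every candidate pod is critical. A minimal check of the same condition, assuming containerd's default unpack location and reusing the byte count from the "bytes read" entry:

    # disk_pressure.py -- minimal sketch: report when the filesystem
    # backing containerd has less free space than the layer to unpack.
    # LAYER_BYTES is the figure from the "bytes read=144068748" entry;
    # TARGET is an assumption (the mount callback failure above points at
    # /var/lib/containerd/tmpmounts).
    import shutil

    LAYER_BYTES = 144_068_748
    TARGET = "/var/lib/containerd"

    usage = shutil.disk_usage(TARGET)  # raises if TARGET does not exist
    if usage.free < LAYER_BYTES:
        print(f"no space to unpack: {usage.free} bytes free "
              f"< {LAYER_BYTES} needed")
    else:
        print(f"ok: {usage.free} bytes free")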
May 15 15:45:58.182078 sshd[4384]: Connection closed by 139.178.68.195 port 43840 May 15 15:45:58.182976 sshd-session[4382]: pam_unix(sshd:session): session closed for user core May 15 15:45:58.196555 systemd[1]: sshd@17-164.92.106.96:22-139.178.68.195:43840.service: Deactivated successfully. May 15 15:45:58.201309 systemd[1]: session-17.scope: Deactivated successfully. May 15 15:45:58.205333 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit. May 15 15:45:58.209256 systemd[1]: Started sshd@18-164.92.106.96:22-139.178.68.195:43852.service - OpenSSH per-connection server daemon (139.178.68.195:43852). May 15 15:45:58.212237 systemd-logind[1513]: Removed session 17. May 15 15:45:58.275887 sshd[4396]: Accepted publickey for core from 139.178.68.195 port 43852 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:58.278195 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:58.288548 systemd-logind[1513]: New session 18 of user core. May 15 15:45:58.292047 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 15:45:58.683361 sshd[4398]: Connection closed by 139.178.68.195 port 43852 May 15 15:45:58.684070 sshd-session[4396]: pam_unix(sshd:session): session closed for user core May 15 15:45:58.701330 systemd[1]: sshd@18-164.92.106.96:22-139.178.68.195:43852.service: Deactivated successfully. May 15 15:45:58.705544 systemd[1]: session-18.scope: Deactivated successfully. May 15 15:45:58.707663 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit. May 15 15:45:58.713996 systemd[1]: Started sshd@19-164.92.106.96:22-139.178.68.195:43866.service - OpenSSH per-connection server daemon (139.178.68.195:43866). May 15 15:45:58.716850 systemd-logind[1513]: Removed session 18. May 15 15:45:58.782917 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 43866 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:45:58.784778 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:45:58.792501 systemd-logind[1513]: New session 19 of user core. May 15 15:45:58.800209 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 15 15:45:58.910300 containerd[1530]: time="2025-05-15T15:45:58.910239752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:45:59.006140 containerd[1530]: time="2025-05-15T15:45:59.005232180Z" level=error msg="Failed to destroy network for sandbox \"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:59.008551 containerd[1530]: time="2025-05-15T15:45:59.008432460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:59.010886 systemd[1]: run-netns-cni\x2d124391d6\x2d795d\x2da27f\x2dfed6\x2d070002ba8239.mount: Deactivated successfully. May 15 15:45:59.016338 kubelet[2778]: E0515 15:45:59.014940 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:45:59.016338 kubelet[2778]: E0515 15:45:59.015037 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:59.016338 kubelet[2778]: E0515 15:45:59.015060 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:59.016338 kubelet[2778]: E0515 15:45:59.015116 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cff19ff84a1e06e01aef304ab44978ddc28f4825c95a0114d72f8d19cde439c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:45:59.246359 kubelet[2778]: I0515 15:45:59.246303 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:45:59.246359 kubelet[2778]: I0515 15:45:59.246364 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:45:59.253050 kubelet[2778]: I0515 15:45:59.253012 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:45:59.274249 kubelet[2778]: I0515 15:45:59.274081 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:45:59.274249 kubelet[2778]: I0515 15:45:59.274177 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:45:59.274249 kubelet[2778]: E0515 15:45:59.274221 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:45:59.274249 kubelet[2778]: E0515 15:45:59.274232 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:45:59.274249 kubelet[2778]: E0515 15:45:59.274240 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:45:59.274249 kubelet[2778]: E0515 15:45:59.274248 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:45:59.274249 kubelet[2778]: E0515 15:45:59.274255 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:45:59.274740 kubelet[2778]: E0515 15:45:59.274271 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:45:59.274740 kubelet[2778]: E0515 15:45:59.274281 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:45:59.274740 kubelet[2778]: E0515 15:45:59.274293 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:45:59.274740 kubelet[2778]: E0515 15:45:59.274304 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:45:59.274740 kubelet[2778]: E0515 15:45:59.274313 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:45:59.274740 kubelet[2778]: I0515 15:45:59.274324 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:46:00.909960 kubelet[2778]: E0515 15:46:00.909797 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:00.911966 containerd[1530]: time="2025-05-15T15:46:00.911557078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:46:01.159851 containerd[1530]: time="2025-05-15T15:46:01.159719498Z" level=error msg="Failed to destroy network for sandbox \"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:01.166256 systemd[1]: run-netns-cni\x2d0efcf8d0\x2d053c\x2d8bbe\x2dde06\x2dd806a6c0dcb3.mount: Deactivated successfully. May 15 15:46:01.167890 containerd[1530]: time="2025-05-15T15:46:01.166362205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:01.168529 kubelet[2778]: E0515 15:46:01.168329 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:01.168529 kubelet[2778]: E0515 15:46:01.168441 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:01.168529 kubelet[2778]: E0515 15:46:01.168496 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:01.169045 kubelet[2778]: E0515 15:46:01.168961 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9e32806dd5d856a1c6a802b965c80f1ae66a3cf434ab83695a0588b7cdb095d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:46:01.909406 kubelet[2778]: E0515 15:46:01.909305 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:02.002141 sshd[4410]: Connection closed by 139.178.68.195 port 43866 May 15 15:46:02.003061 sshd-session[4408]: pam_unix(sshd:session): session closed for user core May 15 15:46:02.023658 systemd[1]: sshd@19-164.92.106.96:22-139.178.68.195:43866.service: Deactivated successfully. May 15 15:46:02.028450 systemd[1]: session-19.scope: Deactivated successfully. May 15 15:46:02.030533 systemd[1]: session-19.scope: Consumed 825ms CPU time, 68.7M memory peak. May 15 15:46:02.035197 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit. May 15 15:46:02.043345 systemd-logind[1513]: Removed session 19. May 15 15:46:02.049262 systemd[1]: Started sshd@20-164.92.106.96:22-139.178.68.195:43878.service - OpenSSH per-connection server daemon (139.178.68.195:43878). May 15 15:46:02.167747 sshd[4484]: Accepted publickey for core from 139.178.68.195 port 43878 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:02.169327 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:02.182603 systemd-logind[1513]: New session 20 of user core. May 15 15:46:02.192037 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 15:46:02.770605 sshd[4488]: Connection closed by 139.178.68.195 port 43878 May 15 15:46:02.771171 sshd-session[4484]: pam_unix(sshd:session): session closed for user core May 15 15:46:02.798159 systemd[1]: sshd@20-164.92.106.96:22-139.178.68.195:43878.service: Deactivated successfully. May 15 15:46:02.805901 systemd[1]: session-20.scope: Deactivated successfully. May 15 15:46:02.810690 systemd-logind[1513]: Session 20 logged out. Waiting for processes to exit. May 15 15:46:02.818942 systemd-logind[1513]: Removed session 20. May 15 15:46:02.827416 systemd[1]: Started sshd@21-164.92.106.96:22-139.178.68.195:43888.service - OpenSSH per-connection server daemon (139.178.68.195:43888). May 15 15:46:02.940409 sshd[4498]: Accepted publickey for core from 139.178.68.195 port 43888 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:02.947104 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:02.965251 systemd-logind[1513]: New session 21 of user core. May 15 15:46:02.976087 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 15:46:03.291559 sshd[4500]: Connection closed by 139.178.68.195 port 43888 May 15 15:46:03.292595 sshd-session[4498]: pam_unix(sshd:session): session closed for user core May 15 15:46:03.305454 systemd[1]: sshd@21-164.92.106.96:22-139.178.68.195:43888.service: Deactivated successfully. May 15 15:46:03.311726 systemd[1]: session-21.scope: Deactivated successfully. May 15 15:46:03.318372 systemd-logind[1513]: Session 21 logged out. Waiting for processes to exit. May 15 15:46:03.326610 systemd-logind[1513]: Removed session 21. 
May 15 15:46:05.907996 kubelet[2778]: E0515 15:46:05.907505 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:05.910988 containerd[1530]: time="2025-05-15T15:46:05.910120117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:46:06.066172 containerd[1530]: time="2025-05-15T15:46:06.066004147Z" level=error msg="Failed to destroy network for sandbox \"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:06.071590 containerd[1530]: time="2025-05-15T15:46:06.071495086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:06.076198 kubelet[2778]: E0515 15:46:06.073056 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:06.076632 kubelet[2778]: E0515 15:46:06.076509 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:06.076632 kubelet[2778]: E0515 15:46:06.076576 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:06.079046 kubelet[2778]: E0515 15:46:06.076957 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5952dc71dbe42a9299d2d0872251dc66a98adf8709e1b26491199d8659d55abf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:46:06.080287 systemd[1]: run-netns-cni\x2dc569040e\x2d351e\x2d4c71\x2d575a\x2d27541e941b8d.mount: Deactivated successfully. May 15 15:46:07.907403 kubelet[2778]: E0515 15:46:07.907248 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:07.911948 kubelet[2778]: E0515 15:46:07.910458 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:46:07.913301 kubelet[2778]: E0515 15:46:07.913097 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:08.307331 systemd[1]: Started sshd@22-164.92.106.96:22-139.178.68.195:38760.service - OpenSSH per-connection server daemon (139.178.68.195:38760). May 15 15:46:08.424971 sshd[4546]: Accepted publickey for core from 139.178.68.195 port 38760 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:08.432094 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:08.450088 systemd-logind[1513]: New session 22 of user core. May 15 15:46:08.459220 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 15:46:08.710603 sshd[4548]: Connection closed by 139.178.68.195 port 38760 May 15 15:46:08.711937 sshd-session[4546]: pam_unix(sshd:session): session closed for user core May 15 15:46:08.722062 systemd[1]: sshd@22-164.92.106.96:22-139.178.68.195:38760.service: Deactivated successfully. May 15 15:46:08.730403 systemd[1]: session-22.scope: Deactivated successfully. May 15 15:46:08.733502 systemd-logind[1513]: Session 22 logged out. Waiting for processes to exit. May 15 15:46:08.739149 systemd-logind[1513]: Removed session 22. 
May 15 15:46:09.314175 kubelet[2778]: I0515 15:46:09.313856 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:46:09.314175 kubelet[2778]: I0515 15:46:09.313940 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:46:09.319117 kubelet[2778]: I0515 15:46:09.319071 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:46:09.349796 kubelet[2778]: I0515 15:46:09.349735 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:46:09.350350 kubelet[2778]: I0515 15:46:09.350300 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350566 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350606 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350615 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350623 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350632 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:46:09.350675 kubelet[2778]: E0515 15:46:09.350648 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:46:09.351037 kubelet[2778]: E0515 15:46:09.350941 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:46:09.351037 kubelet[2778]: E0515 15:46:09.350969 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:46:09.351037 kubelet[2778]: E0515 15:46:09.350980 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:46:09.351037 kubelet[2778]: E0515 15:46:09.350992 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:46:09.351037 kubelet[2778]: I0515 15:46:09.351021 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:46:09.908224 containerd[1530]: time="2025-05-15T15:46:09.908146262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:46:10.057979 containerd[1530]: time="2025-05-15T15:46:10.057885155Z" level=error msg="Failed to 
destroy network for sandbox \"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:10.064389 containerd[1530]: time="2025-05-15T15:46:10.064291729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:10.064528 systemd[1]: run-netns-cni\x2d838e466c\x2d3273\x2db43f\x2d55b7\x2decd3321b66a1.mount: Deactivated successfully. May 15 15:46:10.065674 kubelet[2778]: E0515 15:46:10.065273 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:10.065674 kubelet[2778]: E0515 15:46:10.065358 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:10.065674 kubelet[2778]: E0515 15:46:10.065395 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:10.065674 kubelet[2778]: E0515 15:46:10.065456 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f39faf93a0c5ff73990e8ca5c87012ad93b7125c325c3bb8d57841c43395b7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:46:10.908830 containerd[1530]: time="2025-05-15T15:46:10.908465822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:46:11.064767 containerd[1530]: 
time="2025-05-15T15:46:11.062175640Z" level=error msg="Failed to destroy network for sandbox \"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:11.066263 systemd[1]: run-netns-cni\x2d0de661d0\x2d2fe9\x2ddaf3\x2d6d23\x2db3bf9669a046.mount: Deactivated successfully. May 15 15:46:11.104338 containerd[1530]: time="2025-05-15T15:46:11.104203515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:11.105282 kubelet[2778]: E0515 15:46:11.105153 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:11.106798 kubelet[2778]: E0515 15:46:11.105328 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:11.106798 kubelet[2778]: E0515 15:46:11.105372 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:11.106798 kubelet[2778]: E0515 15:46:11.105536 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b652de2d1d5b2bae98fb6e261b8f0937f714f59eab5fe8d08f9da5c7c3a0faa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:46:12.908818 kubelet[2778]: E0515 15:46:12.908624 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:12.911412 containerd[1530]: time="2025-05-15T15:46:12.910959227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:46:13.063977 containerd[1530]: time="2025-05-15T15:46:13.061383861Z" level=error msg="Failed to destroy network for sandbox \"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:13.065788 systemd[1]: run-netns-cni\x2d68f58bbc\x2d6d6a\x2dc4b8\x2dc213\x2d71f7a0ff2245.mount: Deactivated successfully. May 15 15:46:13.067501 containerd[1530]: time="2025-05-15T15:46:13.067184307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:13.069398 kubelet[2778]: E0515 15:46:13.069310 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:13.071991 kubelet[2778]: E0515 15:46:13.070054 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:13.071991 kubelet[2778]: E0515 15:46:13.070175 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:13.071991 kubelet[2778]: E0515 15:46:13.070523 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a783cba9d5a35d095c4df0c2d2a0f62453e2220a1a27e1386cc152d1845620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:46:13.736020 systemd[1]: Started sshd@23-164.92.106.96:22-139.178.68.195:49432.service - OpenSSH per-connection server daemon (139.178.68.195:49432). May 15 15:46:13.834092 sshd[4655]: Accepted publickey for core from 139.178.68.195 port 49432 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:13.838160 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:13.850416 systemd-logind[1513]: New session 23 of user core. May 15 15:46:13.854237 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 15:46:14.056511 sshd[4657]: Connection closed by 139.178.68.195 port 49432 May 15 15:46:14.057838 sshd-session[4655]: pam_unix(sshd:session): session closed for user core May 15 15:46:14.064644 systemd[1]: sshd@23-164.92.106.96:22-139.178.68.195:49432.service: Deactivated successfully. May 15 15:46:14.068812 systemd[1]: session-23.scope: Deactivated successfully. May 15 15:46:14.070422 systemd-logind[1513]: Session 23 logged out. Waiting for processes to exit. May 15 15:46:14.073972 systemd-logind[1513]: Removed session 23. May 15 15:46:14.909795 kubelet[2778]: E0515 15:46:14.909666 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:17.908275 kubelet[2778]: E0515 15:46:17.907683 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:17.910207 containerd[1530]: time="2025-05-15T15:46:17.910021422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:46:18.046750 containerd[1530]: time="2025-05-15T15:46:18.046501724Z" level=error msg="Failed to destroy network for sandbox \"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:18.053350 containerd[1530]: time="2025-05-15T15:46:18.052334087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:18.053373 systemd[1]: run-netns-cni\x2d2c6a9f28\x2d1ccd\x2d1ddf\x2d4809\x2d39d1e27962d1.mount: Deactivated successfully. 
May 15 15:46:18.058573 kubelet[2778]: E0515 15:46:18.056959 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:18.058573 kubelet[2778]: E0515 15:46:18.057127 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:18.058573 kubelet[2778]: E0515 15:46:18.057884 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:18.058573 kubelet[2778]: E0515 15:46:18.058002 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43285cce8a4e3c3d173f2a9591b11915305443c78a79b1f96d3320ef0d1e8a0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:46:19.074781 systemd[1]: Started sshd@24-164.92.106.96:22-139.178.68.195:49442.service - OpenSSH per-connection server daemon (139.178.68.195:49442). May 15 15:46:19.157020 sshd[4698]: Accepted publickey for core from 139.178.68.195 port 49442 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:19.160695 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:19.169037 systemd-logind[1513]: New session 24 of user core. May 15 15:46:19.176075 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 15:46:19.399244 kubelet[2778]: I0515 15:46:19.398536 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:46:19.399244 kubelet[2778]: I0515 15:46:19.398583 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:46:19.402746 kubelet[2778]: I0515 15:46:19.402603 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:46:19.403674 sshd[4700]: Connection closed by 139.178.68.195 port 49442 May 15 15:46:19.404472 sshd-session[4698]: pam_unix(sshd:session): session closed for user core May 15 15:46:19.416885 systemd[1]: sshd@24-164.92.106.96:22-139.178.68.195:49442.service: Deactivated successfully. 
May 15 15:46:19.428501 systemd[1]: session-24.scope: Deactivated successfully. May 15 15:46:19.437683 kubelet[2778]: I0515 15:46:19.436549 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:46:19.437683 kubelet[2778]: I0515 15:46:19.437342 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437396 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437409 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437417 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437426 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437435 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437447 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437458 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437490 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437503 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:46:19.437683 kubelet[2778]: E0515 15:46:19.437521 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:46:19.437683 kubelet[2778]: I0515 15:46:19.437538 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:46:19.436832 systemd-logind[1513]: Session 24 logged out. Waiting for processes to exit. May 15 15:46:19.440624 systemd-logind[1513]: Removed session 24. 
May 15 15:46:19.909822 kubelet[2778]: E0515 15:46:19.908731 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:19.911123 kubelet[2778]: E0515 15:46:19.910756 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:46:23.909660 containerd[1530]: time="2025-05-15T15:46:23.909407749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:46:23.911358 containerd[1530]: time="2025-05-15T15:46:23.910017072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:46:24.141513 containerd[1530]: time="2025-05-15T15:46:24.141372779Z" level=error msg="Failed to destroy network for sandbox \"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.146871 systemd[1]: run-netns-cni\x2d2a2cf28c\x2dd954\x2d2276\x2d558e\x2d167c1ceb5bd5.mount: Deactivated successfully. May 15 15:46:24.151049 containerd[1530]: time="2025-05-15T15:46:24.150951176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.153126 kubelet[2778]: E0515 15:46:24.153049 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.155348 kubelet[2778]: E0515 15:46:24.153166 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:24.155348 kubelet[2778]: E0515 15:46:24.153215 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:24.155348 kubelet[2778]: E0515 15:46:24.153311 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"206ca437bda8e4c9cdc22d7061935b15e1c1744a9be87fbbaa1b21288991afe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:46:24.156037 containerd[1530]: time="2025-05-15T15:46:24.155943706Z" level=error msg="Failed to destroy network for sandbox \"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.161911 containerd[1530]: time="2025-05-15T15:46:24.159463253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.162885 kubelet[2778]: E0515 15:46:24.162521 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:24.162885 kubelet[2778]: E0515 15:46:24.162592 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:24.162885 kubelet[2778]: E0515 15:46:24.162618 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:24.162885 kubelet[2778]: E0515 15:46:24.162672 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"879c45b066199b34d0b089e59b8a0d3e10ff55b93865def75895cd39b058feb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:46:24.163647 systemd[1]: run-netns-cni\x2d0a92b4ef\x2db236\x2dfb3d\x2d1dae\x2df24e610de4fb.mount: Deactivated successfully. May 15 15:46:24.425740 systemd[1]: Started sshd@25-164.92.106.96:22-139.178.68.195:42322.service - OpenSSH per-connection server daemon (139.178.68.195:42322). May 15 15:46:24.505578 sshd[4775]: Accepted publickey for core from 139.178.68.195 port 42322 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:24.507523 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:24.516941 systemd-logind[1513]: New session 25 of user core. May 15 15:46:24.524371 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 15:46:24.723855 sshd[4777]: Connection closed by 139.178.68.195 port 42322 May 15 15:46:24.725475 sshd-session[4775]: pam_unix(sshd:session): session closed for user core May 15 15:46:24.734157 systemd[1]: sshd@25-164.92.106.96:22-139.178.68.195:42322.service: Deactivated successfully. May 15 15:46:24.739972 systemd[1]: session-25.scope: Deactivated successfully. May 15 15:46:24.743738 systemd-logind[1513]: Session 25 logged out. Waiting for processes to exit. May 15 15:46:24.747465 systemd-logind[1513]: Removed session 25. 
May 15 15:46:27.907511 kubelet[2778]: E0515 15:46:27.907456 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:27.910831 containerd[1530]: time="2025-05-15T15:46:27.910659870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:46:28.022989 containerd[1530]: time="2025-05-15T15:46:28.022900564Z" level=error msg="Failed to destroy network for sandbox \"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:28.027060 containerd[1530]: time="2025-05-15T15:46:28.026870930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:28.030238 kubelet[2778]: E0515 15:46:28.027864 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:28.030238 kubelet[2778]: E0515 15:46:28.028018 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:28.030238 kubelet[2778]: E0515 15:46:28.028059 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:28.030238 kubelet[2778]: E0515 15:46:28.028129 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44ce85f39b2b483b44ff36e7ee8872a62945c6803d1f17da041d0c95dcaa2087\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:46:28.028905 systemd[1]: run-netns-cni\x2d83d5026b\x2ddec8\x2d55c2\x2d2f04\x2d4fae62dc818e.mount: Deactivated successfully. May 15 15:46:29.459537 kubelet[2778]: I0515 15:46:29.459251 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:46:29.459537 kubelet[2778]: I0515 15:46:29.459313 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:46:29.463332 kubelet[2778]: I0515 15:46:29.463307 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:46:29.468868 kubelet[2778]: I0515 15:46:29.468740 2778 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 15 15:46:29.476726 containerd[1530]: time="2025-05-15T15:46:29.476590756Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:46:29.486249 containerd[1530]: time="2025-05-15T15:46:29.486160644Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:46:29.487014 containerd[1530]: time="2025-05-15T15:46:29.486963697Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 15 15:46:29.487803 containerd[1530]: time="2025-05-15T15:46:29.487750611Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 15 15:46:29.487979 containerd[1530]: time="2025-05-15T15:46:29.487952835Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:46:29.489767 kubelet[2778]: I0515 15:46:29.488299 2778 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" size=57236178 runtimeHandler="" May 15 15:46:29.489887 containerd[1530]: time="2025-05-15T15:46:29.488629616Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:46:29.490242 containerd[1530]: time="2025-05-15T15:46:29.490206729Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.12-0\"" May 15 15:46:29.490835 containerd[1530]: time="2025-05-15T15:46:29.490770467Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"" May 15 15:46:29.491684 containerd[1530]: time="2025-05-15T15:46:29.491617166Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" returns successfully" May 15 15:46:29.491819 containerd[1530]: time="2025-05-15T15:46:29.491785712Z" level=info msg="ImageDelete event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 15:46:29.492175 kubelet[2778]: I0515 15:46:29.492125 2778 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" size=321520 runtimeHandler="" May 15 15:46:29.493634 containerd[1530]: time="2025-05-15T15:46:29.493190547Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:46:29.495197 containerd[1530]: 
time="2025-05-15T15:46:29.494782335Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.9\"" May 15 15:46:29.496253 containerd[1530]: time="2025-05-15T15:46:29.496186824Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"" May 15 15:46:29.497304 containerd[1530]: time="2025-05-15T15:46:29.497237241Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" returns successfully" May 15 15:46:29.497445 containerd[1530]: time="2025-05-15T15:46:29.497409344Z" level=info msg="ImageDelete event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 15:46:29.514932 kubelet[2778]: I0515 15:46:29.514885 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:46:29.515141 kubelet[2778]: I0515 15:46:29.514995 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515039 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515056 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515066 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515078 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515088 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515104 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515118 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515131 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:46:29.515141 kubelet[2778]: E0515 15:46:29.515144 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:46:29.515576 kubelet[2778]: E0515 15:46:29.515157 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:46:29.515576 kubelet[2778]: I0515 15:46:29.515174 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:46:29.746205 systemd[1]: 
Started sshd@26-164.92.106.96:22-139.178.68.195:42336.service - OpenSSH per-connection server daemon (139.178.68.195:42336). May 15 15:46:29.834632 sshd[4821]: Accepted publickey for core from 139.178.68.195 port 42336 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:29.836979 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:29.848380 systemd-logind[1513]: New session 26 of user core. May 15 15:46:29.856522 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 15:46:30.067309 sshd[4823]: Connection closed by 139.178.68.195 port 42336 May 15 15:46:30.068292 sshd-session[4821]: pam_unix(sshd:session): session closed for user core May 15 15:46:30.077316 systemd-logind[1513]: Session 26 logged out. Waiting for processes to exit. May 15 15:46:30.078648 systemd[1]: sshd@26-164.92.106.96:22-139.178.68.195:42336.service: Deactivated successfully. May 15 15:46:30.084183 systemd[1]: session-26.scope: Deactivated successfully. May 15 15:46:30.089499 systemd-logind[1513]: Removed session 26. May 15 15:46:30.909744 kubelet[2778]: E0515 15:46:30.907489 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:30.911148 containerd[1530]: time="2025-05-15T15:46:30.910690524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:46:31.015995 containerd[1530]: time="2025-05-15T15:46:31.015205204Z" level=error msg="Failed to destroy network for sandbox \"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:31.022275 containerd[1530]: time="2025-05-15T15:46:31.022203548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:31.024167 systemd[1]: run-netns-cni\x2d98a4a7da\x2df0b9\x2d25e6\x2dfa3c\x2db74a749e657a.mount: Deactivated successfully. 
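The dns.go:153 warning that recurs throughout this log comes from kubelet's pod DNS handling: a pod's resolv.conf may carry at most three nameservers (the traditional glibc MAXNS limit), so kubelet truncates the host's list and logs the applied line. Here the applied line itself repeats 67.207.67.2, so the droplet's resolv.conf evidently held more than three entries, including a duplicate. A minimal sketch of the truncation in Go follows; trimNameservers and the fourth sample entry are illustrative assumptions, not kubelet's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors kubelet's limit of three nameservers per pod,
    // matching what the C resolver will actually use.
    const maxNameservers = 3

    // trimNameservers returns the applied list and whether entries were dropped.
    func trimNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // Assumed host resolv.conf: the first three entries come from the
        // logged "applied nameserver line"; the fourth is hypothetical.
        host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
        if applied, truncated := trimNameservers(host); truncated {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }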
May 15 15:46:31.027069 kubelet[2778]: E0515 15:46:31.024804 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:31.027069 kubelet[2778]: E0515 15:46:31.024893 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:31.027069 kubelet[2778]: E0515 15:46:31.024932 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:31.027069 kubelet[2778]: E0515 15:46:31.025001 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96e8f0aaef453c9c0d86c56d539c9c8e93b85f724357d87c3cf5e490841be185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:46:32.907197 kubelet[2778]: E0515 15:46:32.906824 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:32.909850 kubelet[2778]: E0515 15:46:32.908774 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-nfvst" podUID="85ff5786-c114-43e4-8f58-d6ff4433361a" May 15 15:46:34.908726 containerd[1530]: time="2025-05-15T15:46:34.908648581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:46:35.025522 containerd[1530]: time="2025-05-15T15:46:35.025231514Z" level=error msg="Failed to destroy network for sandbox \"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:35.029561 containerd[1530]: 
time="2025-05-15T15:46:35.027797941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:35.029734 kubelet[2778]: E0515 15:46:35.028856 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:35.029734 kubelet[2778]: E0515 15:46:35.028949 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:35.029734 kubelet[2778]: E0515 15:46:35.028992 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:35.029734 kubelet[2778]: E0515 15:46:35.029108 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c83a8594be415b5bc11630c97429922a8cb5f5ad9374d952f8f58fbf765cac58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:46:35.034162 systemd[1]: run-netns-cni\x2da5146e30\x2d45d8\x2d495a\x2d8fb0\x2d063fd55ea175.mount: Deactivated successfully. May 15 15:46:35.089513 systemd[1]: Started sshd@27-164.92.106.96:22-139.178.68.195:46182.service - OpenSSH per-connection server daemon (139.178.68.195:46182). May 15 15:46:35.160288 sshd[4894]: Accepted publickey for core from 139.178.68.195 port 46182 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:35.163599 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:35.174208 systemd-logind[1513]: New session 27 of user core. 
May 15 15:46:35.180357 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 15:46:35.349791 sshd[4896]: Connection closed by 139.178.68.195 port 46182 May 15 15:46:35.350534 sshd-session[4894]: pam_unix(sshd:session): session closed for user core May 15 15:46:35.358369 systemd-logind[1513]: Session 27 logged out. Waiting for processes to exit. May 15 15:46:35.360198 systemd[1]: sshd@27-164.92.106.96:22-139.178.68.195:46182.service: Deactivated successfully. May 15 15:46:35.366061 systemd[1]: session-27.scope: Deactivated successfully. May 15 15:46:35.371744 systemd-logind[1513]: Removed session 27. May 15 15:46:36.910271 containerd[1530]: time="2025-05-15T15:46:36.910092363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:46:37.019255 containerd[1530]: time="2025-05-15T15:46:37.019146347Z" level=error msg="Failed to destroy network for sandbox \"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:37.021846 containerd[1530]: time="2025-05-15T15:46:37.021687404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:37.023733 kubelet[2778]: E0515 15:46:37.022862 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:37.023733 kubelet[2778]: E0515 15:46:37.022942 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:37.023733 kubelet[2778]: E0515 15:46:37.022970 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:37.023733 kubelet[2778]: E0515 15:46:37.023017 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b479e4c946048c7ca3eb0d1bbe4696ecfde996e71e266d6e65b1f873bcf0ed1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:46:37.026952 systemd[1]: run-netns-cni\x2d7f5aac19\x2db099\x2d28e3\x2d5daa\x2d538890f44e9d.mount: Deactivated successfully. May 15 15:46:40.368167 systemd[1]: Started sshd@28-164.92.106.96:22-139.178.68.195:46190.service - OpenSSH per-connection server daemon (139.178.68.195:46190). May 15 15:46:40.440161 sshd[4942]: Accepted publickey for core from 139.178.68.195 port 46190 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:40.443097 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:40.452275 systemd-logind[1513]: New session 28 of user core. May 15 15:46:40.459921 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 15:46:40.678566 sshd[4944]: Connection closed by 139.178.68.195 port 46190 May 15 15:46:40.679311 sshd-session[4942]: pam_unix(sshd:session): session closed for user core May 15 15:46:40.686732 systemd[1]: sshd@28-164.92.106.96:22-139.178.68.195:46190.service: Deactivated successfully. May 15 15:46:40.693017 systemd[1]: session-28.scope: Deactivated successfully. May 15 15:46:40.697144 systemd-logind[1513]: Session 28 logged out. Waiting for processes to exit. May 15 15:46:40.702524 systemd-logind[1513]: Removed session 28. May 15 15:46:42.908591 kubelet[2778]: E0515 15:46:42.907931 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:42.909662 containerd[1530]: time="2025-05-15T15:46:42.909611476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:46:43.049014 containerd[1530]: time="2025-05-15T15:46:43.047133942Z" level=error msg="Failed to destroy network for sandbox \"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:43.050849 containerd[1530]: time="2025-05-15T15:46:43.050541823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:43.051043 kubelet[2778]: E0515 15:46:43.050898 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:43.051043 kubelet[2778]: E0515 15:46:43.050968 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:43.051043 kubelet[2778]: E0515 15:46:43.050996 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:43.051188 kubelet[2778]: E0515 15:46:43.051056 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2390f1f2f6772c96ba2fec708932380be175580efef40804e41a175cd12afc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:46:43.051802 systemd[1]: run-netns-cni\x2d9b450a8f\x2d5f88\x2df001\x2d9df6\x2df1b65343c905.mount: Deactivated successfully. 
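Every sandbox ADD and DEL in this stretch fails with the same underlying error: the Calico CNI plugin cannot find /var/lib/calico/nodename. That file is written by the calico/node container after it starts and registers the node, and calico-node is itself stuck in ImagePullBackOff (see the 15:46:32 entry above), so the failures are self-sustaining until the image pull succeeds. Below is a minimal sketch of the check that produces the logged message, assuming it is essentially a stat of the file; the real logic lives in Calico's CNI plugin and libcalico-go:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is created by calico/node once it has registered the node;
    // it is bind-mounted to the host under /var/lib/calico/.
    const nodenameFile = "/var/lib/calico/nodename"

    // detectNodename reproduces the failure mode seen above: a missing file
    // yields the stat error plus a hint about the calico/node container.
    func detectNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            // Prints: stat /var/lib/calico/nodename: no such file or directory: check that ...
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }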
May 15 15:46:43.908683 kubelet[2778]: E0515 15:46:43.907488 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:43.910839 containerd[1530]: time="2025-05-15T15:46:43.909868256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:46:44.017736 containerd[1530]: time="2025-05-15T15:46:44.016058367Z" level=error msg="Failed to destroy network for sandbox \"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:44.020359 containerd[1530]: time="2025-05-15T15:46:44.020273070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:44.020380 systemd[1]: run-netns-cni\x2d8f603368\x2dbfcd\x2d994e\x2d5403\x2da6c41364b684.mount: Deactivated successfully. May 15 15:46:44.021987 kubelet[2778]: E0515 15:46:44.021930 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:44.022160 kubelet[2778]: E0515 15:46:44.022010 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:44.022160 kubelet[2778]: E0515 15:46:44.022035 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:44.022160 kubelet[2778]: E0515 15:46:44.022088 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95e03ed279433e6cf097f51e39cd80cd5181ca542938d3c823c9c93595064bb6\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:46:45.698106 systemd[1]: Started sshd@29-164.92.106.96:22-139.178.68.195:47396.service - OpenSSH per-connection server daemon (139.178.68.195:47396). May 15 15:46:45.776120 sshd[5014]: Accepted publickey for core from 139.178.68.195 port 47396 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:45.778223 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:45.791887 systemd-logind[1513]: New session 29 of user core. May 15 15:46:45.802969 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 15:46:45.909319 kubelet[2778]: E0515 15:46:45.908688 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:45.912733 containerd[1530]: time="2025-05-15T15:46:45.912530655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 15:46:46.073876 sshd[5016]: Connection closed by 139.178.68.195 port 47396 May 15 15:46:46.075087 sshd-session[5014]: pam_unix(sshd:session): session closed for user core May 15 15:46:46.083633 systemd[1]: sshd@29-164.92.106.96:22-139.178.68.195:47396.service: Deactivated successfully. May 15 15:46:46.091544 systemd[1]: session-29.scope: Deactivated successfully. May 15 15:46:46.100122 systemd-logind[1513]: Session 29 logged out. Waiting for processes to exit. May 15 15:46:46.104859 systemd-logind[1513]: Removed session 29. May 15 15:46:46.909546 containerd[1530]: time="2025-05-15T15:46:46.909037440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:46:47.046242 containerd[1530]: time="2025-05-15T15:46:47.046109150Z" level=error msg="Failed to destroy network for sandbox \"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:47.051650 systemd[1]: run-netns-cni\x2d57d8efa6\x2dc710\x2deacf\x2db2a4\x2df5a6857647a1.mount: Deactivated successfully. 
May 15 15:46:47.052827 containerd[1530]: time="2025-05-15T15:46:47.052711454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:47.054558 kubelet[2778]: E0515 15:46:47.054497 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:47.056313 kubelet[2778]: E0515 15:46:47.054599 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:47.056313 kubelet[2778]: E0515 15:46:47.054635 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:47.056313 kubelet[2778]: E0515 15:46:47.054741 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14e0d175af4a3c6546c617f27bde4d6e1121b79ea0090c36069bf2799b7d2c5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:46:49.584233 kubelet[2778]: I0515 15:46:49.583721 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:46:49.584233 kubelet[2778]: I0515 15:46:49.583794 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:46:49.587398 kubelet[2778]: I0515 15:46:49.586436 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:46:49.619370 kubelet[2778]: I0515 15:46:49.619290 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:46:49.619546 kubelet[2778]: I0515 
15:46:49.619450 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/csi-node-driver-h6786","calico-system/calico-node-nfvst","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:46:49.619546 kubelet[2778]: E0515 15:46:49.619505 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:49.619546 kubelet[2778]: E0515 15:46:49.619522 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:49.619546 kubelet[2778]: E0515 15:46:49.619533 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:49.619546 kubelet[2778]: E0515 15:46:49.619545 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619555 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619567 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619578 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619588 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619598 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:46:49.620105 kubelet[2778]: E0515 15:46:49.619610 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:46:49.620105 kubelet[2778]: I0515 15:46:49.619621 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:46:49.909623 containerd[1530]: time="2025-05-15T15:46:49.909055041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:46:50.611806 containerd[1530]: time="2025-05-15T15:46:50.611649483Z" level=error msg="Failed to destroy network for sandbox \"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:50.618873 systemd[1]: run-netns-cni\x2dacdfe221\x2d6d55\x2db5a3\x2df707\x2d89e287c47e24.mount: Deactivated successfully. 
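These eviction rounds always end with "unable to evict any pods from the node" because everything left on the node is critical: the ranked pods are control-plane static pods, kube-proxy, and Calico/CoreDNS pods running at system-critical priority, and kubelet's eviction manager refuses to touch critical pods. A simplified stand-in for that gate follows; kubelet's real check (types.IsCriticalPod) also treats static and mirror pods as critical explicitly:

    package main

    import "fmt"

    // systemCriticalPriority is the priority value behind the
    // system-cluster-critical and system-node-critical priority classes.
    const systemCriticalPriority = 2_000_000_000

    type pod struct {
        name     string
        priority int32
        static   bool // static (mirror) pods are always critical
    }

    // isCriticalPod is a simplified sketch of kubelet's criticality test.
    func isCriticalPod(p pod) bool {
        return p.static || p.priority >= systemCriticalPriority
    }

    func main() {
        ranked := []pod{
            {"kube-system/coredns-7db6d8ff4d-lmnwc", systemCriticalPriority, false},
            {"kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089", systemCriticalPriority, true},
        }
        evicted := 0
        for _, p := range ranked {
            if isCriticalPod(p) {
                fmt.Printf("Eviction manager: cannot evict a critical pod pod=%q\n", p.name)
                continue
            }
            evicted++
        }
        if evicted == 0 {
            fmt.Println("Eviction manager: unable to evict any pods from the node")
        }
    }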
May 15 15:46:50.671061 containerd[1530]: time="2025-05-15T15:46:50.670794031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:50.673409 kubelet[2778]: E0515 15:46:50.673350 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:50.675208 kubelet[2778]: E0515 15:46:50.674036 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:50.675208 kubelet[2778]: E0515 15:46:50.674104 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6786" May 15 15:46:50.675208 kubelet[2778]: E0515 15:46:50.674190 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6786_calico-system(d39bfc53-e893-4a7d-a3e9-870e79b27f93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d014585404b3729271b4928a201b67784ed86f33b66fce23ef8f0524ee5435b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6786" podUID="d39bfc53-e893-4a7d-a3e9-870e79b27f93" May 15 15:46:51.103180 systemd[1]: Started sshd@30-164.92.106.96:22-139.178.68.195:47400.service - OpenSSH per-connection server daemon (139.178.68.195:47400). May 15 15:46:51.309419 sshd[5094]: Accepted publickey for core from 139.178.68.195 port 47400 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:51.317170 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:51.349117 systemd-logind[1513]: New session 30 of user core. May 15 15:46:51.357259 systemd[1]: Started session-30.scope - Session 30 of User core. 
May 15 15:46:51.732924 sshd[5096]: Connection closed by 139.178.68.195 port 47400 May 15 15:46:51.734486 sshd-session[5094]: pam_unix(sshd:session): session closed for user core May 15 15:46:51.745092 systemd[1]: sshd@30-164.92.106.96:22-139.178.68.195:47400.service: Deactivated successfully. May 15 15:46:51.752063 systemd[1]: session-30.scope: Deactivated successfully. May 15 15:46:51.760067 systemd-logind[1513]: Session 30 logged out. Waiting for processes to exit. May 15 15:46:51.764245 systemd-logind[1513]: Removed session 30. May 15 15:46:56.757274 systemd[1]: Started sshd@31-164.92.106.96:22-139.178.68.195:53806.service - OpenSSH per-connection server daemon (139.178.68.195:53806). May 15 15:46:56.832430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754396178.mount: Deactivated successfully. May 15 15:46:56.910480 kubelet[2778]: E0515 15:46:56.910374 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:56.938735 containerd[1530]: time="2025-05-15T15:46:56.919016553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:46:56.953637 containerd[1530]: time="2025-05-15T15:46:56.952998683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:46:56.956283 containerd[1530]: time="2025-05-15T15:46:56.956213500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 15:46:56.974581 containerd[1530]: time="2025-05-15T15:46:56.974514226Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:46:56.997001 containerd[1530]: time="2025-05-15T15:46:56.996903542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:46:57.001372 containerd[1530]: time="2025-05-15T15:46:57.001044920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 11.086222909s" May 15 15:46:57.001372 containerd[1530]: time="2025-05-15T15:46:57.001146694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 15:46:57.014021 sshd[5107]: Accepted publickey for core from 139.178.68.195 port 53806 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:46:57.018145 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:46:57.046872 systemd-logind[1513]: New session 31 of user core. May 15 15:46:57.049983 systemd[1]: Started session-31.scope - Session 31 of User core. 
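The two-minute ImagePullBackOff saga for ghcr.io/flatcar/calico/node:v3.29.3 ends here: containerd reports roughly 144 MB read, and kubelet's "in 11.086222909s" measures only the successful attempt, not the back-off time that preceded it. This matters for everything above, since once this image runs, calico/node can write /var/lib/calico/nodename and sandbox creation can start succeeding. The figure can be sanity-checked against the surrounding log timestamps; the small difference is the gap between containerd's internal measurement and the log writes:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // PullImage was requested at 15:46:45.912 and the "returns image
        // reference" reply landed at 15:46:57.001 (timestamps from the log).
        start, _ := time.Parse(time.RFC3339Nano, "2025-05-15T15:46:45.912530655Z")
        done, _ := time.Parse(time.RFC3339Nano, "2025-05-15T15:46:57.001146694Z")
        fmt.Println(done.Sub(start)) // 11.088616039s, consistent with the logged 11.086222909s
    }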
May 15 15:46:57.071770 containerd[1530]: time="2025-05-15T15:46:57.071686394Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 15:46:57.120769 containerd[1530]: time="2025-05-15T15:46:57.120346774Z" level=info msg="Container b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2: CDI devices from CRI Config.CDIDevices: []" May 15 15:46:57.175368 containerd[1530]: time="2025-05-15T15:46:57.175286073Z" level=info msg="CreateContainer within sandbox \"613bdc0b50ec75e1ff26a6ba8a482814849207eae539889a860abf66a3d8b05f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\"" May 15 15:46:57.181579 containerd[1530]: time="2025-05-15T15:46:57.181185877Z" level=info msg="StartContainer for \"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\"" May 15 15:46:57.195040 containerd[1530]: time="2025-05-15T15:46:57.194971598Z" level=info msg="connecting to shim b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2" address="unix:///run/containerd/s/8280441f141ab184c4ba9783f0f24f6a722c797718422063846e1a0f3b9536a1" protocol=ttrpc version=3 May 15 15:46:57.289277 containerd[1530]: time="2025-05-15T15:46:57.289047763Z" level=error msg="Failed to destroy network for sandbox \"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:57.294744 containerd[1530]: time="2025-05-15T15:46:57.293905850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:57.297626 kubelet[2778]: E0515 15:46:57.297540 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:57.298932 kubelet[2778]: E0515 15:46:57.297643 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:57.298932 kubelet[2778]: E0515 15:46:57.297671 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:57.298932 kubelet[2778]: E0515 15:46:57.297800 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vdlk8_kube-system(d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fe0eb3d99aa077ce45c99649a54093d965b620da3d240392cf0714df3ab146b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podUID="d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c" May 15 15:46:57.502110 systemd[1]: Started cri-containerd-b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2.scope - libcontainer container b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2. May 15 15:46:57.528977 sshd[5121]: Connection closed by 139.178.68.195 port 53806 May 15 15:46:57.529502 sshd-session[5107]: pam_unix(sshd:session): session closed for user core May 15 15:46:57.541907 systemd[1]: sshd@31-164.92.106.96:22-139.178.68.195:53806.service: Deactivated successfully. May 15 15:46:57.549993 systemd[1]: session-31.scope: Deactivated successfully. May 15 15:46:57.552449 systemd-logind[1513]: Session 31 logged out. Waiting for processes to exit. May 15 15:46:57.561061 systemd-logind[1513]: Removed session 31. May 15 15:46:57.645013 containerd[1530]: time="2025-05-15T15:46:57.644961989Z" level=info msg="StartContainer for \"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" returns successfully" May 15 15:46:57.674969 kubelet[2778]: E0515 15:46:57.674894 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:57.722742 kubelet[2778]: I0515 15:46:57.721781 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nfvst" podStartSLOduration=2.255189828 podStartE2EDuration="2m12.721747919s" podCreationTimestamp="2025-05-15 15:44:45 +0000 UTC" firstStartedPulling="2025-05-15 15:44:46.543341749 +0000 UTC m=+21.906689309" lastFinishedPulling="2025-05-15 15:46:57.009899835 +0000 UTC m=+152.373247400" observedRunningTime="2025-05-15 15:46:57.697505691 +0000 UTC m=+153.060853276" watchObservedRunningTime="2025-05-15 15:46:57.721747919 +0000 UTC m=+153.085095504" May 15 15:46:57.832272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280511599.mount: Deactivated successfully. May 15 15:46:57.832951 systemd[1]: run-netns-cni\x2de1fee192\x2dbbcc\x2db8bb\x2dc17d\x2dcbb94bee41fb.mount: Deactivated successfully. 
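The pod_startup_latency_tracker entry above encodes how kubelet separates image-pull time from startup latency: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (the creation stamp is logged at whole-second precision), and podStartSLOduration is that figure minus the pull window from firstStartedPulling to lastFinishedPulling. Reproducing the arithmetic from the logged fields:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse reads the "2006-01-02 15:04:05.999999999 -0700 MST" format
    // that kubelet logs for these fields.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-15 15:44:45 +0000 UTC")
        firstPull := mustParse("2025-05-15 15:44:46.543341749 +0000 UTC")
        lastPull := mustParse("2025-05-15 15:46:57.009899835 +0000 UTC")
        running := mustParse("2025-05-15 15:46:57.721747919 +0000 UTC")

        e2e := running.Sub(created)          // 2m12.721747919s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // ≈ 2.255189833s ≈ podStartSLOduration (2.255189828)
        fmt.Println(e2e, slo)
    }

The few-nanosecond discrepancy in the SLO figure arises because kubelet subtracts monotonic clock readings (the m=+… offsets in the entry) while the wall-clock stamps are rounded independently.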
May 15 15:46:57.906782 kubelet[2778]: E0515 15:46:57.906737 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:57.909101 containerd[1530]: time="2025-05-15T15:46:57.909035903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:46:58.110777 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 15:46:58.118348 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 15 15:46:58.154628 containerd[1530]: time="2025-05-15T15:46:58.154343636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"ce777b8d564e8b7dac14c8c8ab71b6ecab8c2350fcb68b54db1e1edbe570e7dc\" pid:5199 exit_status:1 exited_at:{seconds:1747324018 nanos:153397499}" May 15 15:46:58.200837 containerd[1530]: time="2025-05-15T15:46:58.198355864Z" level=error msg="Failed to destroy network for sandbox \"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:58.203948 systemd[1]: run-netns-cni\x2da886955b\x2d0249\x2d57fe\x2db6dd\x2dc4b03f16ae44.mount: Deactivated successfully. May 15 15:46:58.208103 containerd[1530]: time="2025-05-15T15:46:58.206278679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:58.208325 kubelet[2778]: E0515 15:46:58.206714 2778 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 15:46:58.208325 kubelet[2778]: E0515 15:46:58.206806 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:58.208325 kubelet[2778]: E0515 15:46:58.206830 2778 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:58.208325 kubelet[2778]: E0515 15:46:58.206876 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lmnwc_kube-system(2060f7d9-6d6b-4e81-9323-08b479f092eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"752d61e14712a0317ef40f43ff7cc9fdd7b18dec45ca4297d70a7af8bcee4352\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podUID="2060f7d9-6d6b-4e81-9323-08b479f092eb" May 15 15:46:58.679328 kubelet[2778]: E0515 15:46:58.679197 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:46:59.045382 containerd[1530]: time="2025-05-15T15:46:59.045055332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"4715a7d1c8bdcfa1e2f445cbde66389ff89f861ef6c87a47a1e74f03fcf0633f\" pid:5274 exit_status:1 exited_at:{seconds:1747324019 nanos:43599764}" May 15 15:46:59.644609 kubelet[2778]: I0515 15:46:59.644542 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:46:59.644609 kubelet[2778]: I0515 15:46:59.644612 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:46:59.650568 kubelet[2778]: I0515 15:46:59.650490 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:46:59.665835 kubelet[2778]: I0515 15:46:59.665795 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:46:59.666043 kubelet[2778]: I0515 15:46:59.665979 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/csi-node-driver-h6786","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/calico-node-nfvst","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:46:59.666043 kubelet[2778]: E0515 15:46:59.666029 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666047 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666059 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666072 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666089 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666102 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666116 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666129 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666145 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:46:59.666191 kubelet[2778]: E0515 15:46:59.666159 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:46:59.666191 kubelet[2778]: I0515 15:46:59.666179 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:00.910259 containerd[1530]: time="2025-05-15T15:47:00.910185051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,}" May 15 15:47:01.353262 systemd-networkd[1445]: calif685b834ff7: Link UP May 15 15:47:01.355256 systemd-networkd[1445]: calif685b834ff7: Gained carrier May 15 15:47:01.409863 containerd[1530]: 2025-05-15 15:47:01.049 [INFO][5420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0 calico-kube-controllers-65cd59455f- calico-system 86e0d73b-0507-46e9-944b-4fbf6879e642 718 0 2025-05-15 15:44:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65cd59455f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334.0.0-a-8a7930f089 calico-kube-controllers-65cd59455f-72w5b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif685b834ff7 [] []}} ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-" May 15 15:47:01.409863 containerd[1530]: 2025-05-15 15:47:01.050 [INFO][5420] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.409863 containerd[1530]: 2025-05-15 15:47:01.214 [INFO][5432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" HandleID="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Workload="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.246 [INFO][5432] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" HandleID="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Workload="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003877a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-8a7930f089", "pod":"calico-kube-controllers-65cd59455f-72w5b", "timestamp":"2025-05-15 15:47:01.214091867 +0000 UTC"}, Hostname:"ci-4334.0.0-a-8a7930f089", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.246 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.246 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.246 [INFO][5432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-8a7930f089' May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.250 [INFO][5432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.261 [INFO][5432] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.271 [INFO][5432] ipam/ipam.go 489: Trying affinity for 192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.277 [INFO][5432] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.411323 containerd[1530]: 2025-05-15 15:47:01.285 [INFO][5432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.286 [INFO][5432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.290 [INFO][5432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7 May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.303 [INFO][5432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.314 [INFO][5432] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.1/26] block=192.168.30.0/26 handle="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.314 [INFO][5432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.1/26] handle="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:01.415034 containerd[1530]: 2025-05-15 
15:47:01.315 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 15:47:01.415034 containerd[1530]: 2025-05-15 15:47:01.315 [INFO][5432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.1/26] IPv6=[] ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" HandleID="k8s-pod-network.3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Workload="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.415317 containerd[1530]: 2025-05-15 15:47:01.319 [INFO][5420] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0", GenerateName:"calico-kube-controllers-65cd59455f-", Namespace:"calico-system", SelfLink:"", UID:"86e0d73b-0507-46e9-944b-4fbf6879e642", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd59455f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"", Pod:"calico-kube-controllers-65cd59455f-72w5b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif685b834ff7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:01.416833 containerd[1530]: 2025-05-15 15:47:01.321 [INFO][5420] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.1/32] ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.416833 containerd[1530]: 2025-05-15 15:47:01.321 [INFO][5420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif685b834ff7 ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.416833 containerd[1530]: 2025-05-15 15:47:01.348 [INFO][5420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" 
WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.417410 containerd[1530]: 2025-05-15 15:47:01.352 [INFO][5420] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0", GenerateName:"calico-kube-controllers-65cd59455f-", Namespace:"calico-system", SelfLink:"", UID:"86e0d73b-0507-46e9-944b-4fbf6879e642", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cd59455f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7", Pod:"calico-kube-controllers-65cd59455f-72w5b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif685b834ff7", MAC:"52:b4:60:cc:0b:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:01.417525 containerd[1530]: 2025-05-15 15:47:01.399 [INFO][5420] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" Namespace="calico-system" Pod="calico-kube-controllers-65cd59455f-72w5b" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-calico--kube--controllers--65cd59455f--72w5b-eth0" May 15 15:47:01.557441 containerd[1530]: time="2025-05-15T15:47:01.557369222Z" level=info msg="connecting to shim 3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7" address="unix:///run/containerd/s/e8852de48e92eecfeec624b92c9c29b566d1200c9ed308ba858c1f7627b43ea2" namespace=k8s.io protocol=ttrpc version=3 May 15 15:47:01.637104 systemd[1]: Started cri-containerd-3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7.scope - libcontainer container 3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7. 
May 15 15:47:01.673744 systemd-networkd[1445]: vxlan.calico: Link UP May 15 15:47:01.673755 systemd-networkd[1445]: vxlan.calico: Gained carrier May 15 15:47:01.913730 kubelet[2778]: E0515 15:47:01.913661 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:02.003893 containerd[1530]: time="2025-05-15T15:47:02.003520205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cd59455f-72w5b,Uid:86e0d73b-0507-46e9-944b-4fbf6879e642,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a57450366eb0c032e30ede5e675cede870e1d0b8081a5aad7815c5dc8749eb7\"" May 15 15:47:02.038865 containerd[1530]: time="2025-05-15T15:47:02.038745574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:47:02.552938 systemd[1]: Started sshd@32-164.92.106.96:22-139.178.68.195:53812.service - OpenSSH per-connection server daemon (139.178.68.195:53812). May 15 15:47:02.718819 sshd[5566]: Accepted publickey for core from 139.178.68.195 port 53812 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:02.721979 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:02.733660 systemd-logind[1513]: New session 32 of user core. May 15 15:47:02.742127 systemd[1]: Started session-32.scope - Session 32 of User core. May 15 15:47:02.916351 containerd[1530]: time="2025-05-15T15:47:02.914667110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,}" May 15 15:47:03.138033 systemd-networkd[1445]: calif685b834ff7: Gained IPv6LL May 15 15:47:03.330837 sshd[5572]: Connection closed by 139.178.68.195 port 53812 May 15 15:47:03.329353 sshd-session[5566]: pam_unix(sshd:session): session closed for user core May 15 15:47:03.331817 systemd-networkd[1445]: vxlan.calico: Gained IPv6LL May 15 15:47:03.342534 systemd[1]: sshd@32-164.92.106.96:22-139.178.68.195:53812.service: Deactivated successfully. May 15 15:47:03.348305 systemd[1]: session-32.scope: Deactivated successfully. May 15 15:47:03.355865 systemd-logind[1513]: Session 32 logged out. Waiting for processes to exit. May 15 15:47:03.359357 systemd-logind[1513]: Removed session 32. 
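The recurring kubelet warning here ("Nameserver limits exceeded") is a resolv.conf constraint, not a Calico one: common resolvers honor at most three nameserver entries, so the kubelet clips whatever the node's resolv.conf carries down to three before wiring DNS into pod sandboxes, and logs the applied line, duplicates and all (67.207.67.2 appears twice). A hedged sketch of that clipping, not kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // the limit common resolvers honor in resolv.conf

    // applyNameserverLimit keeps at most maxNameservers "nameserver" entries
    // in order; the boolean reports whether anything was dropped.
    func applyNameserverLimit(resolvConf string) ([]string, bool) {
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            f := strings.Fields(sc.Text())
            if len(f) >= 2 && f[0] == "nameserver" {
                servers = append(servers, f[1])
            }
        }
        if len(servers) > maxNameservers {
            return servers[:maxNameservers], true
        }
        return servers, false
    }

    func main() {
        // Hypothetical resolv.conf; the duplicate mirrors the applied line above.
        conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\n" +
            "nameserver 67.207.67.2\nnameserver 67.207.67.4\n"
        applied, clipped := applyNameserverLimit(conf)
        fmt.Println(strings.Join(applied, " "), "clipped:", clipped)
    }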
May 15 15:47:03.507794 systemd-networkd[1445]: cali3e6744e8560: Link UP May 15 15:47:03.509519 systemd-networkd[1445]: cali3e6744e8560: Gained carrier May 15 15:47:03.549080 containerd[1530]: 2025-05-15 15:47:03.133 [INFO][5579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0 csi-node-driver- calico-system d39bfc53-e893-4a7d-a3e9-870e79b27f93 618 0 2025-05-15 15:44:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334.0.0-a-8a7930f089 csi-node-driver-h6786 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e6744e8560 [] []}} ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-" May 15 15:47:03.549080 containerd[1530]: 2025-05-15 15:47:03.135 [INFO][5579] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.549080 containerd[1530]: 2025-05-15 15:47:03.351 [INFO][5593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" HandleID="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Workload="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.392 [INFO][5593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" HandleID="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Workload="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cdb30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-8a7930f089", "pod":"csi-node-driver-h6786", "timestamp":"2025-05-15 15:47:03.351004281 +0000 UTC"}, Hostname:"ci-4334.0.0-a-8a7930f089", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.393 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.394 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
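Each assignment is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": concurrent CNI ADDs on the node are serialized so that two sandboxes being set up at once cannot claim the same address out of the shared block. The discipline reduced to a Go sketch, with an in-memory counter standing in for the block state:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes claims on one node, as the host-wide lock does.
    type hostIPAM struct {
        mu   sync.Mutex
        next int // next free host part in the /26 block
    }

    func (h *hostIPAM) assign(handle string) int {
        h.mu.Lock()         // "About to acquire host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        i := h.next
        h.next++
        fmt.Printf("assigned 192.168.30.%d to %s\n", i, handle)
        return i
    }

    func main() {
        ipam := &hostIPAM{next: 2} // .1 went to calico-kube-controllers above
        var wg sync.WaitGroup
        for _, h := range []string{"csi-node-driver-h6786", "coredns-7db6d8ff4d-vdlk8"} {
            wg.Add(1)
            go func(handle string) {
                defer wg.Done()
                ipam.assign(handle)
            }(h)
        }
        wg.Wait()
    }

Without the lock the two goroutines could read the same next value; with it the claims come out distinct in some order, which is all the allocator needs.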
May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.394 [INFO][5593] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-8a7930f089' May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.405 [INFO][5593] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.423 [INFO][5593] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.444 [INFO][5593] ipam/ipam.go 489: Trying affinity for 192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.452 [INFO][5593] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.551362 containerd[1530]: 2025-05-15 15:47:03.458 [INFO][5593] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.458 [INFO][5593] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.462 [INFO][5593] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73 May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.469 [INFO][5593] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.485 [INFO][5593] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.2/26] block=192.168.30.0/26 handle="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.486 [INFO][5593] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.2/26] handle="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.486 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
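Host-side interface names like calif685b834ff7 and cali3e6744e8560 are deterministic rather than random: Calico derives them from the "cali" prefix plus a truncated hash of the workload endpoint identity, keeping the result within the kernel's 15-character limit on interface names. A sketch of the scheme, assuming SHA-1 over a hypothetical namespace/pod key (the exact hash input is an implementation detail of the plugin):

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameFor derives a stable host-side veth name from a workload key:
    // 4-character "cali" prefix + 11 hex digits = 15 characters, the
    // maximum the kernel accepts for an interface name.
    func vethNameFor(workloadKey string) string {
        sum := sha1.Sum([]byte(workloadKey))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        // Hypothetical key; the real plugin hashes its own endpoint identifier.
        fmt.Println(vethNameFor("calico-system/csi-node-driver-h6786"))
    }

Stability is the point: if the sandbox is re-created for the same workload, the plugin computes the same name instead of leaking a differently named stale device.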
May 15 15:47:03.553303 containerd[1530]: 2025-05-15 15:47:03.486 [INFO][5593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.2/26] IPv6=[] ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" HandleID="k8s-pod-network.2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Workload="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.553655 containerd[1530]: 2025-05-15 15:47:03.498 [INFO][5579] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d39bfc53-e893-4a7d-a3e9-870e79b27f93", ResourceVersion:"618", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"", Pod:"csi-node-driver-h6786", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e6744e8560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:03.555537 containerd[1530]: 2025-05-15 15:47:03.499 [INFO][5579] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.2/32] ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.555537 containerd[1530]: 2025-05-15 15:47:03.499 [INFO][5579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e6744e8560 ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.555537 containerd[1530]: 2025-05-15 15:47:03.513 [INFO][5579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.555678 containerd[1530]: 2025-05-15 15:47:03.515 [INFO][5579] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" 
Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d39bfc53-e893-4a7d-a3e9-870e79b27f93", ResourceVersion:"618", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73", Pod:"csi-node-driver-h6786", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e6744e8560", MAC:"16:9e:75:c6:d9:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:03.555777 containerd[1530]: 2025-05-15 15:47:03.540 [INFO][5579] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" Namespace="calico-system" Pod="csi-node-driver-h6786" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-csi--node--driver--h6786-eth0" May 15 15:47:03.621026 containerd[1530]: time="2025-05-15T15:47:03.620562583Z" level=info msg="connecting to shim 2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73" address="unix:///run/containerd/s/64f5de0678ba1e782863309ed822e7b75cefda4ad6e3841384305c5750cb4ac2" namespace=k8s.io protocol=ttrpc version=3 May 15 15:47:03.689039 systemd[1]: Started cri-containerd-2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73.scope - libcontainer container 2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73. 
May 15 15:47:03.787219 containerd[1530]: time="2025-05-15T15:47:03.786741371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6786,Uid:d39bfc53-e893-4a7d-a3e9-870e79b27f93,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73\"" May 15 15:47:04.738769 systemd-networkd[1445]: cali3e6744e8560: Gained IPv6LL May 15 15:47:05.378270 containerd[1530]: time="2025-05-15T15:47:05.378065964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/70/fs/usr/bin/kube-controllers: no space left on device" May 15 15:47:05.380066 containerd[1530]: time="2025-05-15T15:47:05.378064752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 15:47:05.393394 kubelet[2778]: E0515 15:47:05.379605 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/70/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:05.397982 kubelet[2778]: E0515 15:47:05.397393 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/70/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:05.400618 containerd[1530]: time="2025-05-15T15:47:05.400153919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 15:47:05.467370 kubelet[2778]: E0515 15:47:05.467097 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tklb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/70/fs/usr/bin/kube-controllers: no space left on device May 15 15:47:05.467370 kubelet[2778]: E0515 15:47:05.467202 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/70/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" 
podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:47:05.738671 kubelet[2778]: E0515 15:47:05.738151 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:47:07.321877 containerd[1530]: time="2025-05-15T15:47:07.321613210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:07.327743 containerd[1530]: time="2025-05-15T15:47:07.326854756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 15:47:07.328659 containerd[1530]: time="2025-05-15T15:47:07.328578273Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:07.338400 containerd[1530]: time="2025-05-15T15:47:07.337806371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:07.340246 containerd[1530]: time="2025-05-15T15:47:07.340175562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.939920528s" May 15 15:47:07.340246 containerd[1530]: time="2025-05-15T15:47:07.340243417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 15:47:07.348948 containerd[1530]: time="2025-05-15T15:47:07.348879092Z" level=info msg="CreateContainer within sandbox \"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 15:47:07.377750 containerd[1530]: time="2025-05-15T15:47:07.372097928Z" level=info msg="Container 574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25: CDI devices from CRI Config.CDIDevices: []" May 15 15:47:07.387810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857354907.mount: Deactivated successfully. 
May 15 15:47:07.427194 containerd[1530]: time="2025-05-15T15:47:07.427121878Z" level=info msg="CreateContainer within sandbox \"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25\"" May 15 15:47:07.430753 containerd[1530]: time="2025-05-15T15:47:07.430402774Z" level=info msg="StartContainer for \"574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25\"" May 15 15:47:07.435144 containerd[1530]: time="2025-05-15T15:47:07.434934556Z" level=info msg="connecting to shim 574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25" address="unix:///run/containerd/s/64f5de0678ba1e782863309ed822e7b75cefda4ad6e3841384305c5750cb4ac2" protocol=ttrpc version=3 May 15 15:47:07.490043 systemd[1]: Started cri-containerd-574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25.scope - libcontainer container 574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25. May 15 15:47:07.607096 containerd[1530]: time="2025-05-15T15:47:07.606367716Z" level=info msg="StartContainer for \"574902d6578b10021e15335ed83cd422783af62c49094c9e7cf856e2c8904d25\" returns successfully" May 15 15:47:07.612038 containerd[1530]: time="2025-05-15T15:47:07.611395297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 15:47:08.351126 systemd[1]: Started sshd@33-164.92.106.96:22-139.178.68.195:40242.service - OpenSSH per-connection server daemon (139.178.68.195:40242). May 15 15:47:08.523785 sshd[5708]: Accepted publickey for core from 139.178.68.195 port 40242 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:08.528251 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:08.546884 systemd-logind[1513]: New session 33 of user core. May 15 15:47:08.550282 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 15:47:08.829635 sshd[5710]: Connection closed by 139.178.68.195 port 40242 May 15 15:47:08.830657 sshd-session[5708]: pam_unix(sshd:session): session closed for user core May 15 15:47:08.843064 systemd[1]: sshd@33-164.92.106.96:22-139.178.68.195:40242.service: Deactivated successfully. May 15 15:47:08.844213 systemd-logind[1513]: Session 33 logged out. Waiting for processes to exit. May 15 15:47:08.854270 systemd[1]: session-33.scope: Deactivated successfully. May 15 15:47:08.865563 systemd-logind[1513]: Removed session 33. 
May 15 15:47:08.911577 kubelet[2778]: E0515 15:47:08.911476 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:08.919630 containerd[1530]: time="2025-05-15T15:47:08.916602554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,}" May 15 15:47:09.632110 systemd-networkd[1445]: cali120145085d4: Link UP May 15 15:47:09.634295 systemd-networkd[1445]: cali120145085d4: Gained carrier May 15 15:47:09.732754 containerd[1530]: 2025-05-15 15:47:09.095 [INFO][5723] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0 coredns-7db6d8ff4d- kube-system d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c 727 0 2025-05-15 15:44:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-8a7930f089 coredns-7db6d8ff4d-vdlk8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali120145085d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-" May 15 15:47:09.732754 containerd[1530]: 2025-05-15 15:47:09.096 [INFO][5723] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.732754 containerd[1530]: 2025-05-15 15:47:09.260 [INFO][5737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" HandleID="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.313 [INFO][5737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" HandleID="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eca90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-8a7930f089", "pod":"coredns-7db6d8ff4d-vdlk8", "timestamp":"2025-05-15 15:47:09.260839327 +0000 UTC"}, Hostname:"ci-4334.0.0-a-8a7930f089", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.313 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.313 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.313 [INFO][5737] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-8a7930f089' May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.326 [INFO][5737] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.366 [INFO][5737] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.416 [INFO][5737] ipam/ipam.go 489: Trying affinity for 192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.441 [INFO][5737] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.737550 containerd[1530]: 2025-05-15 15:47:09.468 [INFO][5737] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.468 [INFO][5737] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.482 [INFO][5737] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4 May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.544 [INFO][5737] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.593 [INFO][5737] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.3/26] block=192.168.30.0/26 handle="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.593 [INFO][5737] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.3/26] handle="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.593 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
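A reading note for the endpoint dumps that follow: Go prints the numorstring port values in hex, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (coredns's Prometheus metrics port), matching the decimal "{dns UDP 53 …} {metrics TCP 9153 …}" form shown earlier:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35)   // 53: dns / dns-tcp
        fmt.Println(0x23c1) // 9153: coredns metrics
    }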
May 15 15:47:09.739182 containerd[1530]: 2025-05-15 15:47:09.596 [INFO][5737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.3/26] IPv6=[] ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" HandleID="k8s-pod-network.7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.611 [INFO][5723] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"", Pod:"coredns-7db6d8ff4d-vdlk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali120145085d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.612 [INFO][5723] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.3/32] ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.612 [INFO][5723] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali120145085d4 ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.639 [INFO][5723] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" 
WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.640 [INFO][5723] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4", Pod:"coredns-7db6d8ff4d-vdlk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali120145085d4", MAC:"be:12:84:d8:33:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:09.746416 containerd[1530]: 2025-05-15 15:47:09.708 [INFO][5723] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vdlk8" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--vdlk8-eth0" May 15 15:47:09.834759 kubelet[2778]: I0515 15:47:09.833720 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:47:09.841149 kubelet[2778]: I0515 15:47:09.837197 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:47:09.856177 kubelet[2778]: I0515 15:47:09.856132 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:47:09.880999 kubelet[2778]: I0515 15:47:09.880763 2778 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578" size=21998657 runtimeHandler="" May 15 15:47:09.883936 containerd[1530]: time="2025-05-15T15:47:09.883191458Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:47:09.896079 containerd[1530]: 
time="2025-05-15T15:47:09.896002712Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.36.7\"" May 15 15:47:09.902304 containerd[1530]: time="2025-05-15T15:47:09.901930917Z" level=info msg="connecting to shim 7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4" address="unix:///run/containerd/s/90e2a397bad4e057c349f1945635772ecaf7f3ba34f02885332655c51b476374" namespace=k8s.io protocol=ttrpc version=3 May 15 15:47:09.906546 containerd[1530]: time="2025-05-15T15:47:09.906033547Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\"" May 15 15:47:09.910103 kubelet[2778]: E0515 15:47:09.909951 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:09.911693 containerd[1530]: time="2025-05-15T15:47:09.911507854Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" returns successfully" May 15 15:47:09.917841 containerd[1530]: time="2025-05-15T15:47:09.916014605Z" level=info msg="ImageDelete event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 15:47:09.923118 containerd[1530]: time="2025-05-15T15:47:09.920050499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,}" May 15 15:47:10.150192 kubelet[2778]: I0515 15:47:10.149227 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:47:10.198909 kubelet[2778]: I0515 15:47:10.197674 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-kube-controllers-65cd59455f-72w5b","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-typha-c75d45c47-9qmhx","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:47:10.200283 kubelet[2778]: E0515 15:47:10.199634 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:47:10.201897 kubelet[2778]: E0515 15:47:10.201612 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.202972 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203027 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203050 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203077 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203093 2778 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203115 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203133 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:47:10.203943 kubelet[2778]: E0515 15:47:10.203151 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:47:10.203943 kubelet[2778]: I0515 15:47:10.203196 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:10.269449 systemd[1]: Started cri-containerd-7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4.scope - libcontainer container 7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4. May 15 15:47:10.545347 containerd[1530]: time="2025-05-15T15:47:10.543659426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vdlk8,Uid:d4ab97e1-a8ea-4ff1-b2ca-fc307beaaf5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4\"" May 15 15:47:10.552804 kubelet[2778]: E0515 15:47:10.552752 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:10.682925 containerd[1530]: time="2025-05-15T15:47:10.682830916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:10.684481 containerd[1530]: time="2025-05-15T15:47:10.684403935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 15:47:10.690001 containerd[1530]: time="2025-05-15T15:47:10.689923790Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:10.694340 containerd[1530]: time="2025-05-15T15:47:10.694263148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:10.696134 containerd[1530]: time="2025-05-15T15:47:10.696078026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.084203339s" May 15 15:47:10.696134 containerd[1530]: time="2025-05-15T15:47:10.696128138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 15:47:10.699373 containerd[1530]: time="2025-05-15T15:47:10.699299249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:47:10.706397 containerd[1530]: 
time="2025-05-15T15:47:10.706267739Z" level=info msg="CreateContainer within sandbox \"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 15:47:10.728751 containerd[1530]: time="2025-05-15T15:47:10.728047968Z" level=info msg="Container aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50: CDI devices from CRI Config.CDIDevices: []" May 15 15:47:10.751373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168870898.mount: Deactivated successfully. May 15 15:47:10.761483 containerd[1530]: time="2025-05-15T15:47:10.761408586Z" level=info msg="CreateContainer within sandbox \"2a2bb10e19b56e4eec52cdcd43b68ab2f46348534efa56a3708b41fdc397cc73\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50\"" May 15 15:47:10.764413 containerd[1530]: time="2025-05-15T15:47:10.764171806Z" level=info msg="StartContainer for \"aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50\"" May 15 15:47:10.783933 containerd[1530]: time="2025-05-15T15:47:10.783854970Z" level=info msg="connecting to shim aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50" address="unix:///run/containerd/s/64f5de0678ba1e782863309ed822e7b75cefda4ad6e3841384305c5750cb4ac2" protocol=ttrpc version=3 May 15 15:47:10.890020 systemd[1]: Started cri-containerd-aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50.scope - libcontainer container aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50. May 15 15:47:10.985922 systemd-networkd[1445]: calib799e28a934: Link UP May 15 15:47:10.986361 systemd-networkd[1445]: calib799e28a934: Gained carrier May 15 15:47:11.011686 systemd-networkd[1445]: cali120145085d4: Gained IPv6LL May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.164 [INFO][5782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0 coredns-7db6d8ff4d- kube-system 2060f7d9-6d6b-4e81-9323-08b479f092eb 725 0 2025-05-15 15:44:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-8a7930f089 coredns-7db6d8ff4d-lmnwc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib799e28a934 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.166 [INFO][5782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.586 [INFO][5815] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" HandleID="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" 
May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.687 [INFO][5815] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" HandleID="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000421180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-8a7930f089", "pod":"coredns-7db6d8ff4d-lmnwc", "timestamp":"2025-05-15 15:47:10.586893776 +0000 UTC"}, Hostname:"ci-4334.0.0-a-8a7930f089", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.688 [INFO][5815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.688 [INFO][5815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.688 [INFO][5815] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-8a7930f089' May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.707 [INFO][5815] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.771 [INFO][5815] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.844 [INFO][5815] ipam/ipam.go 489: Trying affinity for 192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.862 [INFO][5815] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.879 [INFO][5815] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.879 [INFO][5815] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.886 [INFO][5815] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315 May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.922 [INFO][5815] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.961 [INFO][5815] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.4/26] block=192.168.30.0/26 handle="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.961 [INFO][5815] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.4/26] handle="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" 
host="ci-4334.0.0-a-8a7930f089" May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.961 [INFO][5815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 15:47:11.061672 containerd[1530]: 2025-05-15 15:47:10.961 [INFO][5815] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.4/26] IPv6=[] ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" HandleID="k8s-pod-network.c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Workload="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:10.971 [INFO][5782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2060f7d9-6d6b-4e81-9323-08b479f092eb", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"", Pod:"coredns-7db6d8ff4d-lmnwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib799e28a934", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:10.971 [INFO][5782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.4/32] ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:10.972 [INFO][5782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib799e28a934 ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:10.990 [INFO][5782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:10.992 [INFO][5782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2060f7d9-6d6b-4e81-9323-08b479f092eb", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 15, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-8a7930f089", ContainerID:"c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315", Pod:"coredns-7db6d8ff4d-lmnwc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib799e28a934", MAC:"de:cf:e4:dd:a8:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 15:47:11.067336 containerd[1530]: 2025-05-15 15:47:11.042 [INFO][5782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lmnwc" WorkloadEndpoint="ci--4334.0.0--a--8a7930f089-k8s-coredns--7db6d8ff4d--lmnwc-eth0" May 15 15:47:11.145824 containerd[1530]: time="2025-05-15T15:47:11.144952552Z" level=info msg="connecting to shim c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315" address="unix:///run/containerd/s/b51f204d46473e08ebea843f5f88d0c91ef1b6242b0fdc3d09ae63373372b3e3" namespace=k8s.io protocol=ttrpc version=3 May 15 15:47:11.224526 systemd[1]: Started cri-containerd-c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315.scope - libcontainer container c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315. 
May 15 15:47:11.343413 containerd[1530]: time="2025-05-15T15:47:11.343362164Z" level=info msg="StartContainer for \"aa53cd0d6540812d4753d623edcb4e677b7281093ca5fc40552dd1b2fad6db50\" returns successfully" May 15 15:47:11.533527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130144792.mount: Deactivated successfully. May 15 15:47:11.602910 containerd[1530]: time="2025-05-15T15:47:11.602836024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lmnwc,Uid:2060f7d9-6d6b-4e81-9323-08b479f092eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315\"" May 15 15:47:11.609523 kubelet[2778]: E0515 15:47:11.609463 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:12.111628 kubelet[2778]: I0515 15:47:12.111518 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h6786" podStartSLOduration=139.204022109 podStartE2EDuration="2m26.111475995s" podCreationTimestamp="2025-05-15 15:44:46 +0000 UTC" firstStartedPulling="2025-05-15 15:47:03.790214272 +0000 UTC m=+159.153561839" lastFinishedPulling="2025-05-15 15:47:10.697668148 +0000 UTC m=+166.061015725" observedRunningTime="2025-05-15 15:47:12.107140477 +0000 UTC m=+167.470488072" watchObservedRunningTime="2025-05-15 15:47:12.111475995 +0000 UTC m=+167.474823622" May 15 15:47:12.289931 systemd-networkd[1445]: calib799e28a934: Gained IPv6LL May 15 15:47:12.404625 kubelet[2778]: I0515 15:47:12.404377 2778 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 15:47:12.418247 kubelet[2778]: I0515 15:47:12.417815 2778 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 15:47:12.569377 containerd[1530]: time="2025-05-15T15:47:12.568459834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"1b1cdcac67beb1428a3d5ddbb5bca956cb05aff8b4e5f6cc6b2bcabbf8879d4d\" pid:5920 exited_at:{seconds:1747324032 nanos:564919445}" May 15 15:47:12.585491 kubelet[2778]: E0515 15:47:12.585158 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:13.851786 systemd[1]: Started sshd@34-164.92.106.96:22-139.178.68.195:56940.service - OpenSSH per-connection server daemon (139.178.68.195:56940). 
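The pod_startup_latency_tracker record above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Here is a short Go check of the csi-node-driver-h6786 numbers, assuming plain wall-clock subtraction; the kubelet actually diffs the monotonic m=+ offsets, which is why the last few nanoseconds in the log differ slightly.

package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the kubelet log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-15 15:44:46 +0000 UTC")
	firstPull := mustParse("2025-05-15 15:47:03.790214272 +0000 UTC")
	lastPull := mustParse("2025-05-15 15:47:10.697668148 +0000 UTC")
	running := mustParse("2025-05-15 15:47:12.111475995 +0000 UTC")

	e2e := running.Sub(created)     // 2m26.111475995s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~6.907s spent pulling the image
	slo := e2e - pull               // ~139.204s == podStartSLOduration

	fmt.Println(e2e, pull, slo.Seconds())
}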
May 15 15:47:13.881578 containerd[1530]: time="2025-05-15T15:47:13.880374698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:13.883533 containerd[1530]: time="2025-05-15T15:47:13.883426495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 15:47:13.886012 containerd[1530]: time="2025-05-15T15:47:13.885955027Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:13.923600 containerd[1530]: time="2025-05-15T15:47:13.923533747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:13.933374 containerd[1530]: time="2025-05-15T15:47:13.933311400Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.231812166s" May 15 15:47:13.934848 containerd[1530]: time="2025-05-15T15:47:13.934771999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:47:13.940915 containerd[1530]: time="2025-05-15T15:47:13.940774495Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 15:47:13.952888 containerd[1530]: time="2025-05-15T15:47:13.949914825Z" level=info msg="CreateContainer within sandbox \"7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 15:47:13.991421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491269984.mount: Deactivated successfully. 
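For reference, the PullImage records above correspond to a single CRI RPC against containerd. A minimal Go client issuing the same call is sketched below; the socket path and timeout are assumptions, and error handling is trimmed to the essentials.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: containerd's default on Flatcar.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	start := time.Now()
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
	})
	if err != nil {
		log.Fatal(err) // e.g. "no space left on device", as later in this log
	}
	// A warm re-pull returns almost immediately (the 249.573614ms record a
	// few lines below), while the cold pull above took ~3.2s.
	fmt.Println(resp.ImageRef, time.Since(start))
}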
May 15 15:47:14.001623 containerd[1530]: time="2025-05-15T15:47:14.000173697Z" level=info msg="Container 0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8: CDI devices from CRI Config.CDIDevices: []" May 15 15:47:14.055914 containerd[1530]: time="2025-05-15T15:47:14.054782694Z" level=info msg="CreateContainer within sandbox \"7e8de15db6d5d566b927bc048fc5e185a252859848d5ae456a8062fcac9d7df4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8\"" May 15 15:47:14.057735 containerd[1530]: time="2025-05-15T15:47:14.057647916Z" level=info msg="StartContainer for \"0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8\"" May 15 15:47:14.059657 containerd[1530]: time="2025-05-15T15:47:14.059424019Z" level=info msg="connecting to shim 0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8" address="unix:///run/containerd/s/90e2a397bad4e057c349f1945635772ecaf7f3ba34f02885332655c51b476374" protocol=ttrpc version=3 May 15 15:47:14.080608 sshd[5997]: Accepted publickey for core from 139.178.68.195 port 56940 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:14.086209 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:14.097803 systemd-logind[1513]: New session 34 of user core. May 15 15:47:14.104575 systemd[1]: Started session-34.scope - Session 34 of User core. May 15 15:47:14.180457 systemd[1]: Started cri-containerd-0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8.scope - libcontainer container 0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8. May 15 15:47:14.187265 containerd[1530]: time="2025-05-15T15:47:14.184312492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 15:47:14.187265 containerd[1530]: time="2025-05-15T15:47:14.186269215Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=0" May 15 15:47:14.191128 containerd[1530]: time="2025-05-15T15:47:14.190428796Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 249.573614ms" May 15 15:47:14.191128 containerd[1530]: time="2025-05-15T15:47:14.190480476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 15:47:14.212727 containerd[1530]: time="2025-05-15T15:47:14.212521446Z" level=info msg="CreateContainer within sandbox \"c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 15:47:14.259532 containerd[1530]: time="2025-05-15T15:47:14.259229476Z" level=info msg="Container 6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9: CDI devices from CRI Config.CDIDevices: []" May 15 15:47:14.279610 containerd[1530]: time="2025-05-15T15:47:14.279524200Z" level=info msg="CreateContainer within sandbox \"c77612d3fa6ef4aab30f5853973438fee1999b331bc9cba4d7e7eefef4d30315\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9\"" May 15 15:47:14.285277 containerd[1530]: time="2025-05-15T15:47:14.285132468Z" level=info msg="StartContainer for \"6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9\"" May 15 15:47:14.292547 containerd[1530]: time="2025-05-15T15:47:14.292402452Z" level=info msg="connecting to shim 6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9" address="unix:///run/containerd/s/b51f204d46473e08ebea843f5f88d0c91ef1b6242b0fdc3d09ae63373372b3e3" protocol=ttrpc version=3 May 15 15:47:14.382083 systemd[1]: Started cri-containerd-6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9.scope - libcontainer container 6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9. May 15 15:47:14.446225 containerd[1530]: time="2025-05-15T15:47:14.446086358Z" level=info msg="StartContainer for \"0a618da4699afed505bc9246e6076dad2f7564eddce10e6582b587353d84b7c8\" returns successfully" May 15 15:47:14.575607 containerd[1530]: time="2025-05-15T15:47:14.575425179Z" level=info msg="StartContainer for \"6921ac71cba6e3b32e5d3a95fdba8187195c9b5ee30a6b9a460af7224e1112f9\" returns successfully" May 15 15:47:14.744195 sshd[6000]: Connection closed by 139.178.68.195 port 56940 May 15 15:47:14.744994 sshd-session[5997]: pam_unix(sshd:session): session closed for user core May 15 15:47:14.750574 systemd-logind[1513]: Session 34 logged out. Waiting for processes to exit. May 15 15:47:14.752893 systemd[1]: sshd@34-164.92.106.96:22-139.178.68.195:56940.service: Deactivated successfully. May 15 15:47:14.759871 systemd[1]: session-34.scope: Deactivated successfully. May 15 15:47:14.767211 systemd-logind[1513]: Removed session 34. May 15 15:47:14.965409 kubelet[2778]: E0515 15:47:14.964990 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:14.969251 kubelet[2778]: E0515 15:47:14.969035 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:14.991450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876099111.mount: Deactivated successfully. 
May 15 15:47:15.069828 kubelet[2778]: I0515 15:47:15.069622 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lmnwc" podStartSLOduration=154.484315374 podStartE2EDuration="2m37.06958565s" podCreationTimestamp="2025-05-15 15:44:38 +0000 UTC" firstStartedPulling="2025-05-15 15:47:11.614329293 +0000 UTC m=+166.977676871" lastFinishedPulling="2025-05-15 15:47:14.199599564 +0000 UTC m=+169.562947147" observedRunningTime="2025-05-15 15:47:15.037854429 +0000 UTC m=+170.401202025" watchObservedRunningTime="2025-05-15 15:47:15.06958565 +0000 UTC m=+170.432933250" May 15 15:47:15.076765 kubelet[2778]: I0515 15:47:15.074576 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vdlk8" podStartSLOduration=153.69230747 podStartE2EDuration="2m37.074542698s" podCreationTimestamp="2025-05-15 15:44:38 +0000 UTC" firstStartedPulling="2025-05-15 15:47:10.557547082 +0000 UTC m=+165.920894658" lastFinishedPulling="2025-05-15 15:47:13.93978231 +0000 UTC m=+169.303129886" observedRunningTime="2025-05-15 15:47:15.07046249 +0000 UTC m=+170.433810074" watchObservedRunningTime="2025-05-15 15:47:15.074542698 +0000 UTC m=+170.437890298" May 15 15:47:15.908525 kubelet[2778]: E0515 15:47:15.908435 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:15.972354 kubelet[2778]: E0515 15:47:15.972303 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:15.973983 kubelet[2778]: E0515 15:47:15.972379 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:16.975772 kubelet[2778]: E0515 15:47:16.975660 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:16.985677 kubelet[2778]: E0515 15:47:16.985416 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:17.979935 kubelet[2778]: E0515 15:47:17.979884 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:18.910132 containerd[1530]: time="2025-05-15T15:47:18.909844253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:47:19.763008 systemd[1]: Started sshd@35-164.92.106.96:22-139.178.68.195:56950.service - OpenSSH per-connection server daemon (139.178.68.195:56950). May 15 15:47:19.847480 sshd[6090]: Accepted publickey for core from 139.178.68.195 port 56950 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:19.850954 sshd-session[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:19.858158 systemd-logind[1513]: New session 35 of user core. May 15 15:47:19.870217 systemd[1]: Started session-35.scope - Session 35 of User core. 
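The recurring dns.go:153 events here and below come from the kubelet capping pod resolv.conf files at three nameservers; the droplet's resolver list evidently repeats 67.207.67.2, so the applied line keeps only the first three entries. A small Go sketch of that clipping, assuming the standard resolv.conf syntax and the kubelet's limit of 3:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // the kubelet's cap for a pod's resolv.conf

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxDNSNameservers {
		// This is the condition behind the dns.go:153 events above.
		fmt.Printf("Nameserver limits exceeded, applying first %d of %v\n",
			maxDNSNameservers, servers)
		servers = servers[:maxDNSNameservers]
	}
	fmt.Println("applied:", strings.Join(servers, " "))
}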
May 15 15:47:20.240334 sshd[6092]: Connection closed by 139.178.68.195 port 56950 May 15 15:47:20.241852 sshd-session[6090]: pam_unix(sshd:session): session closed for user core May 15 15:47:20.255269 systemd[1]: sshd@35-164.92.106.96:22-139.178.68.195:56950.service: Deactivated successfully. May 15 15:47:20.255475 systemd-logind[1513]: Session 35 logged out. Waiting for processes to exit. May 15 15:47:20.266099 systemd[1]: session-35.scope: Deactivated successfully. May 15 15:47:20.276962 systemd-logind[1513]: Removed session 35. May 15 15:47:20.310642 kubelet[2778]: I0515 15:47:20.309439 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:47:20.310642 kubelet[2778]: I0515 15:47:20.309569 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:47:20.314638 kubelet[2778]: I0515 15:47:20.314590 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:47:20.325082 containerd[1530]: time="2025-05-15T15:47:20.324996805Z" level=error msg="failed to cleanup \"extract-39800521-fXno sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:47:20.327252 containerd[1530]: time="2025-05-15T15:47:20.327179394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:47:20.327531 containerd[1530]: time="2025-05-15T15:47:20.327232194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=6295734" May 15 15:47:20.328375 kubelet[2778]: E0515 15:47:20.328304 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:20.328735 kubelet[2778]: E0515 15:47:20.328586 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:20.329318 kubelet[2778]: E0515 15:47:20.328894 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tklb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:47:20.329318 kubelet[2778]: E0515 15:47:20.328941 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:47:20.362891 kubelet[2778]: I0515 15:47:20.362824 2778 eviction_manager.go:377] 
"Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:47:20.363128 kubelet[2778]: I0515 15:47:20.363086 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363148 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363176 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363189 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363205 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363219 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363234 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:47:20.363262 kubelet[2778]: E0515 15:47:20.363258 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:47:20.363545 kubelet[2778]: E0515 15:47:20.363273 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:47:20.363545 kubelet[2778]: E0515 15:47:20.363282 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:47:20.363545 kubelet[2778]: E0515 15:47:20.363295 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:47:20.363545 kubelet[2778]: I0515 15:47:20.363310 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:25.258475 systemd[1]: Started sshd@36-164.92.106.96:22-139.178.68.195:34992.service - OpenSSH per-connection server daemon (139.178.68.195:34992). May 15 15:47:25.361861 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 34992 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:25.363690 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:25.372816 systemd-logind[1513]: New session 36 of user core. May 15 15:47:25.381319 systemd[1]: Started session-36.scope - Session 36 of User core. 
May 15 15:47:25.674776 sshd[6117]: Connection closed by 139.178.68.195 port 34992 May 15 15:47:25.675062 sshd-session[6115]: pam_unix(sshd:session): session closed for user core May 15 15:47:25.695254 systemd[1]: sshd@36-164.92.106.96:22-139.178.68.195:34992.service: Deactivated successfully. May 15 15:47:25.703193 systemd[1]: session-36.scope: Deactivated successfully. May 15 15:47:25.706564 systemd-logind[1513]: Session 36 logged out. Waiting for processes to exit. May 15 15:47:25.710422 systemd-logind[1513]: Removed session 36. May 15 15:47:28.907336 kubelet[2778]: E0515 15:47:28.906863 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:30.418742 kubelet[2778]: I0515 15:47:30.418504 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:47:30.418742 kubelet[2778]: I0515 15:47:30.418564 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:47:30.426197 kubelet[2778]: I0515 15:47:30.425957 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:47:30.475375 kubelet[2778]: I0515 15:47:30.475084 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:47:30.475375 kubelet[2778]: I0515 15:47:30.475307 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475755 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475790 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475804 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475818 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475832 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475846 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475860 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475870 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475881 2778 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:47:30.475919 kubelet[2778]: E0515 15:47:30.475891 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:47:30.475919 kubelet[2778]: I0515 15:47:30.475903 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:30.689423 systemd[1]: Started sshd@37-164.92.106.96:22-139.178.68.195:34994.service - OpenSSH per-connection server daemon (139.178.68.195:34994). May 15 15:47:30.778470 sshd[6129]: Accepted publickey for core from 139.178.68.195 port 34994 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:30.780952 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:30.794797 systemd-logind[1513]: New session 37 of user core. May 15 15:47:30.801982 systemd[1]: Started session-37.scope - Session 37 of User core. May 15 15:47:31.058533 sshd[6131]: Connection closed by 139.178.68.195 port 34994 May 15 15:47:31.059192 sshd-session[6129]: pam_unix(sshd:session): session closed for user core May 15 15:47:31.067470 systemd-logind[1513]: Session 37 logged out. Waiting for processes to exit. May 15 15:47:31.067928 systemd[1]: sshd@37-164.92.106.96:22-139.178.68.195:34994.service: Deactivated successfully. May 15 15:47:31.074623 systemd[1]: session-37.scope: Deactivated successfully. May 15 15:47:31.078328 systemd-logind[1513]: Removed session 37. May 15 15:47:31.907936 kubelet[2778]: E0515 15:47:31.907685 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:33.908485 kubelet[2778]: E0515 15:47:33.908400 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:47:36.073096 systemd[1]: Started sshd@38-164.92.106.96:22-139.178.68.195:46436.service - OpenSSH per-connection server daemon (139.178.68.195:46436). May 15 15:47:36.199025 sshd[6144]: Accepted publickey for core from 139.178.68.195 port 46436 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:36.202732 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:36.215058 systemd-logind[1513]: New session 38 of user core. May 15 15:47:36.221163 systemd[1]: Started session-38.scope - Session 38 of User core. May 15 15:47:36.447539 sshd[6147]: Connection closed by 139.178.68.195 port 46436 May 15 15:47:36.448743 sshd-session[6144]: pam_unix(sshd:session): session closed for user core May 15 15:47:36.456894 systemd[1]: sshd@38-164.92.106.96:22-139.178.68.195:46436.service: Deactivated successfully. May 15 15:47:36.461858 systemd[1]: session-38.scope: Deactivated successfully. May 15 15:47:36.463448 systemd-logind[1513]: Session 38 logged out. Waiting for processes to exit. May 15 15:47:36.466344 systemd-logind[1513]: Removed session 38. 
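Both pull failures in this log (the ErrImagePull records above and the retry below) die on "no space left on device" under /var/lib/containerd, and the periodic eviction sweeps cannot free anything because every pod is critical. A quick Go free-space check for the filesystem containerd writes to, assuming Linux and the default state directory:

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	const dir = "/var/lib/containerd" // containerd's default root on Flatcar
	var st unix.Statfs_t
	if err := unix.Statfs(dir, &st); err != nil {
		log.Fatal(err)
	}
	free := st.Bavail * uint64(st.Bsize)
	total := st.Blocks * uint64(st.Bsize)
	fmt.Printf("%s: %d of %d bytes free (%.1f%%)\n",
		dir, free, total, 100*float64(free)/float64(total))
	// When free space reaches zero, pulls fail mid-copy exactly as in the
	// "failed to copy: write .../ingest/...: no space left on device" records.
}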
May 15 15:47:40.518141 kubelet[2778]: I0515 15:47:40.518092 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:47:40.518661 kubelet[2778]: I0515 15:47:40.518644 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:47:40.521004 kubelet[2778]: I0515 15:47:40.520905 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:47:40.544654 kubelet[2778]: I0515 15:47:40.544592 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:47:40.545199 kubelet[2778]: I0515 15:47:40.545165 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545817 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545851 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545867 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545901 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545922 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545937 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:47:40.545965 kubelet[2778]: E0515 15:47:40.545977 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:47:40.546226 kubelet[2778]: E0515 15:47:40.545994 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:47:40.546226 kubelet[2778]: E0515 15:47:40.546009 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:47:40.546226 kubelet[2778]: E0515 15:47:40.546026 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:47:40.546226 kubelet[2778]: I0515 15:47:40.546063 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:41.401250 containerd[1530]: time="2025-05-15T15:47:41.401127092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"cbd6d764c89c20d148e58a4edcd3180bc62feaad134f4dceeb7decceb5c4ddd5\" pid:6175 exited_at:{seconds:1747324061 nanos:400153026}" 
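The paired TaskExit events for container b0b07825... (at 15:47:12 earlier and 15:47:41 here, roughly thirty seconds apart) are consistent with an exec-based probe: each run execs a helper inside the container, and containerd reports the task's exit. A minimal loop with the same shape, assuming a 30s period; the command matches the /usr/bin/check-status probes in the container spec dumped earlier in this log but is otherwise a stand-in.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Period matches the ReadinessProbe PeriodSeconds:30 seen in the
	// container spec above; the command path is an assumption here.
	tick := time.NewTicker(30 * time.Second)
	defer tick.Stop()
	for range tick.C {
		cmd := exec.Command("/usr/bin/check-status", "-r")
		if err := cmd.Run(); err != nil {
			log.Printf("probe failed: %v", err) // non-zero exit -> not ready
			continue
		}
		log.Print("probe ok") // each run surfaces as a TaskExit event
	}
}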
May 15 15:47:41.476089 systemd[1]: Started sshd@39-164.92.106.96:22-139.178.68.195:46446.service - OpenSSH per-connection server daemon (139.178.68.195:46446). May 15 15:47:41.557096 sshd[6188]: Accepted publickey for core from 139.178.68.195 port 46446 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:41.559411 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:41.567513 systemd-logind[1513]: New session 39 of user core. May 15 15:47:41.578047 systemd[1]: Started session-39.scope - Session 39 of User core. May 15 15:47:41.753502 sshd[6190]: Connection closed by 139.178.68.195 port 46446 May 15 15:47:41.754442 sshd-session[6188]: pam_unix(sshd:session): session closed for user core May 15 15:47:41.761086 systemd-logind[1513]: Session 39 logged out. Waiting for processes to exit. May 15 15:47:41.762352 systemd[1]: sshd@39-164.92.106.96:22-139.178.68.195:46446.service: Deactivated successfully. May 15 15:47:41.765290 systemd[1]: session-39.scope: Deactivated successfully. May 15 15:47:41.767809 systemd-logind[1513]: Removed session 39. May 15 15:47:41.907250 kubelet[2778]: E0515 15:47:41.907113 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:47:45.913810 containerd[1530]: time="2025-05-15T15:47:45.912266912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:47:46.771630 systemd[1]: Started sshd@40-164.92.106.96:22-139.178.68.195:56618.service - OpenSSH per-connection server daemon (139.178.68.195:56618). May 15 15:47:46.858821 sshd[6208]: Accepted publickey for core from 139.178.68.195 port 56618 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:46.861650 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:46.880082 systemd-logind[1513]: New session 40 of user core. May 15 15:47:46.885052 systemd[1]: Started session-40.scope - Session 40 of User core. May 15 15:47:47.107966 sshd[6210]: Connection closed by 139.178.68.195 port 56618 May 15 15:47:47.108822 sshd-session[6208]: pam_unix(sshd:session): session closed for user core May 15 15:47:47.115967 systemd[1]: sshd@40-164.92.106.96:22-139.178.68.195:56618.service: Deactivated successfully. May 15 15:47:47.119249 systemd[1]: session-40.scope: Deactivated successfully. May 15 15:47:47.120936 systemd-logind[1513]: Session 40 logged out. Waiting for processes to exit. May 15 15:47:47.128604 systemd-logind[1513]: Removed session 40. 
May 15 15:47:47.359503 containerd[1530]: time="2025-05-15T15:47:47.359258868Z" level=error msg="failed to cleanup \"extract-148791164-etHu sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:47:47.361474 containerd[1530]: time="2025-05-15T15:47:47.360385114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:47:47.361474 containerd[1530]: time="2025-05-15T15:47:47.360506833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=6295734" May 15 15:47:47.361595 kubelet[2778]: E0515 15:47:47.360994 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:47.361595 kubelet[2778]: E0515 15:47:47.361201 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:47:47.362757 kubelet[2778]: E0515 15:47:47.361649 2778 kuberuntime_manager.go:1256] container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tklb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:47:47.362757 kubelet[2778]: E0515 15:47:47.361789 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:47:50.594533 kubelet[2778]: I0515 15:47:50.594457 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:47:50.594533 kubelet[2778]: I0515 15:47:50.594526 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:47:50.603904 kubelet[2778]: I0515 15:47:50.603303 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:47:50.638924 kubelet[2778]: I0515 15:47:50.638858 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:47:50.639269 kubelet[2778]: I0515 15:47:50.639213 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:47:50.639420 kubelet[2778]: E0515 15:47:50.639309 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:47:50.639420 kubelet[2778]: E0515 15:47:50.639359 2778 eviction_manager.go:598] "Eviction manager: cannot 
evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:47:50.639420 kubelet[2778]: E0515 15:47:50.639375 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:47:50.639420 kubelet[2778]: E0515 15:47:50.639390 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:47:50.639513 kubelet[2778]: E0515 15:47:50.639425 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:47:50.639513 kubelet[2778]: E0515 15:47:50.639440 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:47:50.639513 kubelet[2778]: E0515 15:47:50.639458 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:47:50.639513 kubelet[2778]: E0515 15:47:50.639493 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:47:50.639513 kubelet[2778]: E0515 15:47:50.639507 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:47:50.639656 kubelet[2778]: E0515 15:47:50.639522 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:47:50.639656 kubelet[2778]: I0515 15:47:50.639539 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:47:52.137111 systemd[1]: Started sshd@41-164.92.106.96:22-139.178.68.195:56620.service - OpenSSH per-connection server daemon (139.178.68.195:56620). May 15 15:47:52.216628 sshd[6224]: Accepted publickey for core from 139.178.68.195 port 56620 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:52.220323 sshd-session[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:52.233550 systemd-logind[1513]: New session 41 of user core. May 15 15:47:52.241724 systemd[1]: Started session-41.scope - Session 41 of User core. May 15 15:47:52.418443 sshd[6226]: Connection closed by 139.178.68.195 port 56620 May 15 15:47:52.419371 sshd-session[6224]: pam_unix(sshd:session): session closed for user core May 15 15:47:52.427320 systemd[1]: sshd@41-164.92.106.96:22-139.178.68.195:56620.service: Deactivated successfully. May 15 15:47:52.431354 systemd[1]: session-41.scope: Deactivated successfully. May 15 15:47:52.434500 systemd-logind[1513]: Session 41 logged out. Waiting for processes to exit. May 15 15:47:52.438017 systemd-logind[1513]: Removed session 41. May 15 15:47:57.438217 systemd[1]: Started sshd@42-164.92.106.96:22-139.178.68.195:40270.service - OpenSSH per-connection server daemon (139.178.68.195:40270). May 15 15:47:57.552773 sshd[6238]: Accepted publickey for core from 139.178.68.195 port 40270 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:47:57.555109 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:47:57.565202 systemd-logind[1513]: New session 42 of user core. May 15 15:47:57.571167 systemd[1]: Started session-42.scope - Session 42 of User core. 
May 15 15:47:57.901770 sshd[6240]: Connection closed by 139.178.68.195 port 40270 May 15 15:47:57.902657 sshd-session[6238]: pam_unix(sshd:session): session closed for user core May 15 15:47:57.910148 systemd[1]: sshd@42-164.92.106.96:22-139.178.68.195:40270.service: Deactivated successfully. May 15 15:47:57.914338 systemd[1]: session-42.scope: Deactivated successfully. May 15 15:47:57.916559 systemd-logind[1513]: Session 42 logged out. Waiting for processes to exit. May 15 15:47:57.919547 systemd-logind[1513]: Removed session 42. May 15 15:47:58.909357 kubelet[2778]: E0515 15:47:58.909156 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:48:00.687034 kubelet[2778]: I0515 15:48:00.686911 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:00.687034 kubelet[2778]: I0515 15:48:00.686979 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:48:00.695963 kubelet[2778]: I0515 15:48:00.695918 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:48:00.729101 kubelet[2778]: I0515 15:48:00.728797 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:00.736020 kubelet[2778]: I0515 15:48:00.735942 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:00.736590 kubelet[2778]: E0515 15:48:00.736554 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:00.737747 kubelet[2778]: E0515 15:48:00.737582 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:00.737747 kubelet[2778]: E0515 15:48:00.737642 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:00.737747 kubelet[2778]: E0515 15:48:00.737662 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:00.737747 kubelet[2778]: E0515 15:48:00.737682 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:00.738038 kubelet[2778]: E0515 15:48:00.738022 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:00.738241 kubelet[2778]: E0515 15:48:00.738199 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:00.738670 kubelet[2778]: E0515 15:48:00.738641 2778 eviction_manager.go:598] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:00.738878 kubelet[2778]: E0515 15:48:00.738758 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:00.738878 kubelet[2778]: E0515 15:48:00.738776 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:00.738878 kubelet[2778]: I0515 15:48:00.738793 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:02.930479 systemd[1]: Started sshd@43-164.92.106.96:22-139.178.68.195:40274.service - OpenSSH per-connection server daemon (139.178.68.195:40274). May 15 15:48:03.035779 sshd[6251]: Accepted publickey for core from 139.178.68.195 port 40274 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:03.039881 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:03.049756 systemd-logind[1513]: New session 43 of user core. May 15 15:48:03.057218 systemd[1]: Started session-43.scope - Session 43 of User core. May 15 15:48:03.366453 sshd[6253]: Connection closed by 139.178.68.195 port 40274 May 15 15:48:03.368091 sshd-session[6251]: pam_unix(sshd:session): session closed for user core May 15 15:48:03.381858 systemd[1]: sshd@43-164.92.106.96:22-139.178.68.195:40274.service: Deactivated successfully. May 15 15:48:03.389061 systemd[1]: session-43.scope: Deactivated successfully. May 15 15:48:03.393618 systemd-logind[1513]: Session 43 logged out. Waiting for processes to exit. May 15 15:48:03.396781 systemd-logind[1513]: Removed session 43. May 15 15:48:08.395665 systemd[1]: Started sshd@44-164.92.106.96:22-139.178.68.195:47800.service - OpenSSH per-connection server daemon (139.178.68.195:47800). May 15 15:48:08.494404 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 47800 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:08.498297 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:08.513297 systemd-logind[1513]: New session 44 of user core. May 15 15:48:08.521083 systemd[1]: Started session-44.scope - Session 44 of User core. May 15 15:48:08.800854 sshd[6267]: Connection closed by 139.178.68.195 port 47800 May 15 15:48:08.802195 sshd-session[6265]: pam_unix(sshd:session): session closed for user core May 15 15:48:08.816012 systemd-logind[1513]: Session 44 logged out. Waiting for processes to exit. May 15 15:48:08.816444 systemd[1]: sshd@44-164.92.106.96:22-139.178.68.195:47800.service: Deactivated successfully. May 15 15:48:08.825245 systemd[1]: session-44.scope: Deactivated successfully. May 15 15:48:08.831555 systemd-logind[1513]: Removed session 44. 
May 15 15:48:10.781520 kubelet[2778]: I0515 15:48:10.781455 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:10.781520 kubelet[2778]: I0515 15:48:10.781526 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:48:10.789269 kubelet[2778]: I0515 15:48:10.789214 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:48:10.831948 kubelet[2778]: I0515 15:48:10.831897 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:10.832190 kubelet[2778]: I0515 15:48:10.832173 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832230 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832257 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832274 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832289 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832306 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:10.832320 kubelet[2778]: E0515 15:48:10.832317 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:10.832550 kubelet[2778]: E0515 15:48:10.832333 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:10.832550 kubelet[2778]: E0515 15:48:10.832345 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:10.832550 kubelet[2778]: E0515 15:48:10.832355 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:10.832550 kubelet[2778]: E0515 15:48:10.832369 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:10.832550 kubelet[2778]: I0515 15:48:10.832384 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:10.911004 kubelet[2778]: E0515 15:48:10.910893 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" 
pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:48:11.450929 containerd[1530]: time="2025-05-15T15:48:11.450833837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"1ef024d0d8c8a67c103c3a02d84b2f9604fa10f7b761822c240babe622ae5991\" pid:6294 exited_at:{seconds:1747324091 nanos:449362915}" May 15 15:48:13.819376 systemd[1]: Started sshd@45-164.92.106.96:22-139.178.68.195:59822.service - OpenSSH per-connection server daemon (139.178.68.195:59822). May 15 15:48:13.921436 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 59822 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:13.927562 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:13.939864 systemd-logind[1513]: New session 45 of user core. May 15 15:48:13.946050 systemd[1]: Started session-45.scope - Session 45 of User core. May 15 15:48:14.143354 sshd[6310]: Connection closed by 139.178.68.195 port 59822 May 15 15:48:14.144764 sshd-session[6308]: pam_unix(sshd:session): session closed for user core May 15 15:48:14.151574 systemd[1]: sshd@45-164.92.106.96:22-139.178.68.195:59822.service: Deactivated successfully. May 15 15:48:14.152489 systemd-logind[1513]: Session 45 logged out. Waiting for processes to exit. May 15 15:48:14.156395 systemd[1]: session-45.scope: Deactivated successfully. May 15 15:48:14.161532 systemd-logind[1513]: Removed session 45. May 15 15:48:15.908824 kubelet[2778]: E0515 15:48:15.908751 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:19.164197 systemd[1]: Started sshd@46-164.92.106.96:22-139.178.68.195:59830.service - OpenSSH per-connection server daemon (139.178.68.195:59830). May 15 15:48:19.245382 sshd[6322]: Accepted publickey for core from 139.178.68.195 port 59830 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:19.251254 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:19.271492 systemd-logind[1513]: New session 46 of user core. May 15 15:48:19.277149 systemd[1]: Started session-46.scope - Session 46 of User core. May 15 15:48:19.467371 sshd[6324]: Connection closed by 139.178.68.195 port 59830 May 15 15:48:19.468307 sshd-session[6322]: pam_unix(sshd:session): session closed for user core May 15 15:48:19.474145 systemd[1]: sshd@46-164.92.106.96:22-139.178.68.195:59830.service: Deactivated successfully. May 15 15:48:19.479665 systemd[1]: session-46.scope: Deactivated successfully. May 15 15:48:19.483047 systemd-logind[1513]: Session 46 logged out. Waiting for processes to exit. May 15 15:48:19.487416 systemd-logind[1513]: Removed session 46. 
May 15 15:48:20.875744 kubelet[2778]: I0515 15:48:20.874616 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:20.877330 kubelet[2778]: I0515 15:48:20.875767 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:48:20.879789 kubelet[2778]: I0515 15:48:20.879751 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:48:20.929796 kubelet[2778]: I0515 15:48:20.929678 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:20.930341 kubelet[2778]: I0515 15:48:20.930074 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:20.930341 kubelet[2778]: E0515 15:48:20.930165 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:20.930341 kubelet[2778]: E0515 15:48:20.930336 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930363 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930383 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930399 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930417 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930440 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930478 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930491 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:20.930947 kubelet[2778]: E0515 15:48:20.930509 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:20.930947 kubelet[2778]: I0515 15:48:20.930526 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:23.909892 kubelet[2778]: E0515 15:48:23.909076 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" 
pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:48:24.491065 systemd[1]: Started sshd@47-164.92.106.96:22-139.178.68.195:38522.service - OpenSSH per-connection server daemon (139.178.68.195:38522). May 15 15:48:24.599141 sshd[6341]: Accepted publickey for core from 139.178.68.195 port 38522 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:24.602881 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:24.611097 systemd-logind[1513]: New session 47 of user core. May 15 15:48:24.615977 systemd[1]: Started session-47.scope - Session 47 of User core. May 15 15:48:25.015620 sshd[6343]: Connection closed by 139.178.68.195 port 38522 May 15 15:48:25.018507 sshd-session[6341]: pam_unix(sshd:session): session closed for user core May 15 15:48:25.028247 systemd[1]: sshd@47-164.92.106.96:22-139.178.68.195:38522.service: Deactivated successfully. May 15 15:48:25.031860 systemd[1]: session-47.scope: Deactivated successfully. May 15 15:48:25.036850 systemd-logind[1513]: Session 47 logged out. Waiting for processes to exit. May 15 15:48:25.040075 systemd-logind[1513]: Removed session 47. May 15 15:48:29.908323 kubelet[2778]: E0515 15:48:29.908240 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:29.908323 kubelet[2778]: E0515 15:48:29.908253 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:29.910054 kubelet[2778]: E0515 15:48:29.909407 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:30.040119 systemd[1]: Started sshd@48-164.92.106.96:22-139.178.68.195:38530.service - OpenSSH per-connection server daemon (139.178.68.195:38530). May 15 15:48:30.144746 sshd[6357]: Accepted publickey for core from 139.178.68.195 port 38530 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:30.148032 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:30.160309 systemd-logind[1513]: New session 48 of user core. May 15 15:48:30.167108 systemd[1]: Started session-48.scope - Session 48 of User core. May 15 15:48:30.408852 sshd[6359]: Connection closed by 139.178.68.195 port 38530 May 15 15:48:30.409356 sshd-session[6357]: pam_unix(sshd:session): session closed for user core May 15 15:48:30.417632 systemd-logind[1513]: Session 48 logged out. Waiting for processes to exit. May 15 15:48:30.419065 systemd[1]: sshd@48-164.92.106.96:22-139.178.68.195:38530.service: Deactivated successfully. May 15 15:48:30.424690 systemd[1]: session-48.scope: Deactivated successfully. May 15 15:48:30.437181 systemd-logind[1513]: Removed session 48. 
May 15 15:48:30.907909 kubelet[2778]: E0515 15:48:30.907722 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:30.977612 kubelet[2778]: I0515 15:48:30.977394 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:30.977612 kubelet[2778]: I0515 15:48:30.977463 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:48:30.983598 kubelet[2778]: I0515 15:48:30.983560 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:48:31.021405 kubelet[2778]: I0515 15:48:31.021358 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:31.021635 kubelet[2778]: I0515 15:48:31.021545 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:31.021635 kubelet[2778]: E0515 15:48:31.021600 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:31.021635 kubelet[2778]: E0515 15:48:31.021620 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021653 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021672 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021688 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021728 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021746 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021804 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021825 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:31.023045 kubelet[2778]: E0515 15:48:31.021843 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:31.023045 kubelet[2778]: I0515 15:48:31.021858 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:34.909978 containerd[1530]: time="2025-05-15T15:48:34.909887322Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 15:48:35.430885 systemd[1]: Started sshd@49-164.92.106.96:22-139.178.68.195:50356.service - OpenSSH per-connection server daemon (139.178.68.195:50356). May 15 15:48:35.520531 sshd[6378]: Accepted publickey for core from 139.178.68.195 port 50356 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:35.524446 sshd-session[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:35.537046 systemd-logind[1513]: New session 49 of user core. May 15 15:48:35.543883 systemd[1]: Started session-49.scope - Session 49 of User core. May 15 15:48:35.749435 sshd[6380]: Connection closed by 139.178.68.195 port 50356 May 15 15:48:35.750781 sshd-session[6378]: pam_unix(sshd:session): session closed for user core May 15 15:48:35.758691 systemd[1]: sshd@49-164.92.106.96:22-139.178.68.195:50356.service: Deactivated successfully. May 15 15:48:35.763601 systemd[1]: session-49.scope: Deactivated successfully. May 15 15:48:35.766045 systemd-logind[1513]: Session 49 logged out. Waiting for processes to exit. May 15 15:48:35.768975 systemd-logind[1513]: Removed session 49. May 15 15:48:36.409526 containerd[1530]: time="2025-05-15T15:48:36.409406067Z" level=error msg="failed to cleanup \"extract-186189202-2TxR sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 15:48:36.411201 containerd[1530]: time="2025-05-15T15:48:36.410727445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" May 15 15:48:36.411201 containerd[1530]: time="2025-05-15T15:48:36.410857204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=6295734" May 15 15:48:36.411560 kubelet[2778]: E0515 15:48:36.411229 2778 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:48:36.411560 kubelet[2778]: E0515 15:48:36.411295 2778 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 15:48:36.411560 kubelet[2778]: E0515 15:48:36.411558 2778 kuberuntime_manager.go:1256] container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tklb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65cd59455f-72w5b_calico-system(86e0d73b-0507-46e9-944b-4fbf6879e642): ErrImagePull: failed to pull and unpack image "ghcr.io/flatcar/calico/kube-controllers:v3.29.3": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device May 15 15:48:36.412081 kubelet[2778]: E0515 15:48:36.411606 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:48:38.907848 kubelet[2778]: E0515 15:48:38.907527 2778 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:39.908158 kubelet[2778]: E0515 15:48:39.907850 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:40.782221 systemd[1]: Started sshd@50-164.92.106.96:22-139.178.68.195:50360.service - OpenSSH per-connection server daemon (139.178.68.195:50360). May 15 15:48:40.875553 sshd[6405]: Accepted publickey for core from 139.178.68.195 port 50360 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:40.878183 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:40.895891 systemd-logind[1513]: New session 50 of user core. May 15 15:48:40.903066 systemd[1]: Started session-50.scope - Session 50 of User core. May 15 15:48:41.108854 kubelet[2778]: I0515 15:48:41.108571 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:41.108854 kubelet[2778]: I0515 15:48:41.108644 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:41.110280 kubelet[2778]: I0515 15:48:41.109969 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110065 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110103 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110121 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110136 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110152 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110167 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110187 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110202 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110218 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:41.110280 kubelet[2778]: E0515 15:48:41.110232 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:41.110280 kubelet[2778]: I0515 15:48:41.110249 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:41.276011 sshd[6407]: Connection closed by 139.178.68.195 port 50360 May 15 15:48:41.277666 sshd-session[6405]: pam_unix(sshd:session): session closed for user core May 15 15:48:41.286900 systemd[1]: sshd@50-164.92.106.96:22-139.178.68.195:50360.service: Deactivated successfully. May 15 15:48:41.296053 systemd[1]: session-50.scope: Deactivated successfully. May 15 15:48:41.310981 systemd-logind[1513]: Session 50 logged out. Waiting for processes to exit. May 15 15:48:41.317073 systemd-logind[1513]: Removed session 50. May 15 15:48:41.542342 containerd[1530]: time="2025-05-15T15:48:41.542019683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"dd10feb6d160d58fb91dac0495a25d0a6c1590af4c96c2395804c9718d43cc85\" pid:6430 exited_at:{seconds:1747324121 nanos:541379569}" May 15 15:48:46.303600 systemd[1]: Started sshd@51-164.92.106.96:22-139.178.68.195:54264.service - OpenSSH per-connection server daemon (139.178.68.195:54264). May 15 15:48:46.430574 sshd[6443]: Accepted publickey for core from 139.178.68.195 port 54264 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:46.434858 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:46.452619 systemd-logind[1513]: New session 51 of user core. May 15 15:48:46.461012 systemd[1]: Started session-51.scope - Session 51 of User core. May 15 15:48:46.767385 sshd[6445]: Connection closed by 139.178.68.195 port 54264 May 15 15:48:46.768676 sshd-session[6443]: pam_unix(sshd:session): session closed for user core May 15 15:48:46.777014 systemd[1]: sshd@51-164.92.106.96:22-139.178.68.195:54264.service: Deactivated successfully. May 15 15:48:46.786519 systemd[1]: session-51.scope: Deactivated successfully. May 15 15:48:46.792389 systemd-logind[1513]: Session 51 logged out. Waiting for processes to exit. May 15 15:48:46.795064 systemd-logind[1513]: Removed session 51. 
May 15 15:48:47.908107 kubelet[2778]: E0515 15:48:47.907964 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 15 15:48:49.909512 kubelet[2778]: E0515 15:48:49.909338 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:48:51.163995 kubelet[2778]: I0515 15:48:51.163937 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:48:51.163995 kubelet[2778]: I0515 15:48:51.163999 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:48:51.168575 kubelet[2778]: I0515 15:48:51.168532 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:48:51.205750 kubelet[2778]: I0515 15:48:51.205231 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:48:51.205750 kubelet[2778]: I0515 15:48:51.205499 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205564 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205587 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205602 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205617 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205632 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205648 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:48:51.205750 kubelet[2778]: E0515 15:48:51.205669 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:48:51.206438 kubelet[2778]: E0515 15:48:51.206372 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:48:51.206438 kubelet[2778]: E0515 15:48:51.206401 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:48:51.206438 kubelet[2778]: E0515 
15:48:51.206412 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:48:51.206438 kubelet[2778]: I0515 15:48:51.206425 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:48:51.788012 systemd[1]: Started sshd@52-164.92.106.96:22-139.178.68.195:54268.service - OpenSSH per-connection server daemon (139.178.68.195:54268). May 15 15:48:51.871673 sshd[6458]: Accepted publickey for core from 139.178.68.195 port 54268 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:51.874441 sshd-session[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:51.887819 systemd-logind[1513]: New session 52 of user core. May 15 15:48:51.898182 systemd[1]: Started session-52.scope - Session 52 of User core. May 15 15:48:52.087070 sshd[6460]: Connection closed by 139.178.68.195 port 54268 May 15 15:48:52.087963 sshd-session[6458]: pam_unix(sshd:session): session closed for user core May 15 15:48:52.095943 systemd-logind[1513]: Session 52 logged out. Waiting for processes to exit. May 15 15:48:52.097459 systemd[1]: sshd@52-164.92.106.96:22-139.178.68.195:54268.service: Deactivated successfully. May 15 15:48:52.103611 systemd[1]: session-52.scope: Deactivated successfully. May 15 15:48:52.107006 systemd-logind[1513]: Removed session 52. May 15 15:48:57.109162 systemd[1]: Started sshd@53-164.92.106.96:22-139.178.68.195:42632.service - OpenSSH per-connection server daemon (139.178.68.195:42632). May 15 15:48:57.217765 sshd[6472]: Accepted publickey for core from 139.178.68.195 port 42632 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:48:57.222070 sshd-session[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:48:57.234824 systemd-logind[1513]: New session 53 of user core. May 15 15:48:57.240114 systemd[1]: Started session-53.scope - Session 53 of User core. May 15 15:48:57.643763 sshd[6474]: Connection closed by 139.178.68.195 port 42632 May 15 15:48:57.645007 sshd-session[6472]: pam_unix(sshd:session): session closed for user core May 15 15:48:57.663635 systemd[1]: sshd@53-164.92.106.96:22-139.178.68.195:42632.service: Deactivated successfully. May 15 15:48:57.667981 systemd[1]: session-53.scope: Deactivated successfully. May 15 15:48:57.674204 systemd-logind[1513]: Session 53 logged out. Waiting for processes to exit. May 15 15:48:57.678864 systemd-logind[1513]: Removed session 53. 
May 15 15:49:01.251528 kubelet[2778]: I0515 15:49:01.251401 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:49:01.252970 kubelet[2778]: I0515 15:49:01.251533 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:49:01.255645 kubelet[2778]: I0515 15:49:01.255603 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:49:01.288872 kubelet[2778]: I0515 15:49:01.288828 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:49:01.289152 kubelet[2778]: I0515 15:49:01.289115 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-lmnwc","kube-system/coredns-7db6d8ff4d-vdlk8","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289171 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289192 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289203 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289212 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289223 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289234 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289248 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:49:01.289254 kubelet[2778]: E0515 15:49:01.289258 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:49:01.289458 kubelet[2778]: E0515 15:49:01.289270 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:49:01.289458 kubelet[2778]: E0515 15:49:01.289279 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:49:01.289458 kubelet[2778]: I0515 15:49:01.289290 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:49:02.673221 systemd[1]: Started sshd@54-164.92.106.96:22-139.178.68.195:42640.service - OpenSSH per-connection server daemon (139.178.68.195:42640). 
May 15 15:49:02.823546 sshd[6486]: Accepted publickey for core from 139.178.68.195 port 42640 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:49:02.830369 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:49:02.851109 systemd-logind[1513]: New session 54 of user core. May 15 15:49:02.858114 systemd[1]: Started session-54.scope - Session 54 of User core. May 15 15:49:03.494813 sshd[6488]: Connection closed by 139.178.68.195 port 42640 May 15 15:49:03.496457 sshd-session[6486]: pam_unix(sshd:session): session closed for user core May 15 15:49:03.513399 systemd[1]: sshd@54-164.92.106.96:22-139.178.68.195:42640.service: Deactivated successfully. May 15 15:49:03.518599 systemd[1]: session-54.scope: Deactivated successfully. May 15 15:49:03.522916 systemd-logind[1513]: Session 54 logged out. Waiting for processes to exit. May 15 15:49:03.530406 systemd-logind[1513]: Removed session 54. May 15 15:49:03.910248 kubelet[2778]: E0515 15:49:03.909818 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642" May 15 15:49:08.514844 systemd[1]: Started sshd@55-164.92.106.96:22-139.178.68.195:52016.service - OpenSSH per-connection server daemon (139.178.68.195:52016). May 15 15:49:08.629191 sshd[6500]: Accepted publickey for core from 139.178.68.195 port 52016 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:49:08.634086 sshd-session[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:49:08.647831 systemd-logind[1513]: New session 55 of user core. May 15 15:49:08.656176 systemd[1]: Started session-55.scope - Session 55 of User core. May 15 15:49:09.008147 sshd[6502]: Connection closed by 139.178.68.195 port 52016 May 15 15:49:09.009885 sshd-session[6500]: pam_unix(sshd:session): session closed for user core May 15 15:49:09.021655 systemd-logind[1513]: Session 55 logged out. Waiting for processes to exit. May 15 15:49:09.023065 systemd[1]: sshd@55-164.92.106.96:22-139.178.68.195:52016.service: Deactivated successfully. May 15 15:49:09.036117 systemd[1]: session-55.scope: Deactivated successfully. May 15 15:49:09.046821 systemd-logind[1513]: Removed session 55. 
May 15 15:49:11.406069 kubelet[2778]: I0515 15:49:11.404747 2778 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 15:49:11.406069 kubelet[2778]: I0515 15:49:11.404903 2778 container_gc.go:88] "Attempting to delete unused containers" May 15 15:49:11.415100 kubelet[2778]: I0515 15:49:11.415047 2778 image_gc_manager.go:404] "Attempting to delete unused images" May 15 15:49:11.475185 kubelet[2778]: I0515 15:49:11.475122 2778 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 15:49:11.476841 kubelet[2778]: I0515 15:49:11.476771 2778 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-65cd59455f-72w5b","calico-system/calico-typha-c75d45c47-9qmhx","kube-system/coredns-7db6d8ff4d-vdlk8","kube-system/coredns-7db6d8ff4d-lmnwc","calico-system/calico-node-nfvst","kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089","calico-system/csi-node-driver-h6786","kube-system/kube-proxy-mmxxf","kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089","kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089"] May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476885 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476914 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-c75d45c47-9qmhx" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476935 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-vdlk8" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476955 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7db6d8ff4d-lmnwc" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476977 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-nfvst" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.476994 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-8a7930f089" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.477016 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-h6786" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.477034 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mmxxf" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.477050 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4334.0.0-a-8a7930f089" May 15 15:49:11.477145 kubelet[2778]: E0515 15:49:11.477068 2778 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4334.0.0-a-8a7930f089" May 15 15:49:11.477145 kubelet[2778]: I0515 15:49:11.477089 2778 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 15:49:11.567236 containerd[1530]: time="2025-05-15T15:49:11.567125775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b078250557c99df14a5642030c2fc78c226870ff5bb76e6a24c11ce3f92ee2\" id:\"85c3c9b2b251c128d61b2aba71c21082efb4ce37127d86fa2cea9cee36d81cc6\" pid:6529 exited_at:{seconds:1747324151 nanos:565992832}" 
May 15 15:49:14.030392 systemd[1]: Started sshd@56-164.92.106.96:22-139.178.68.195:37144.service - OpenSSH per-connection server daemon (139.178.68.195:37144). May 15 15:49:14.157756 sshd[6541]: Accepted publickey for core from 139.178.68.195 port 37144 ssh2: RSA SHA256:D6RDpgs86g07i9RnVi9m6DQ9xVgwtos+G7ePPwsGXvo May 15 15:49:14.162745 sshd-session[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 15:49:14.176658 systemd-logind[1513]: New session 56 of user core. May 15 15:49:14.186600 systemd[1]: Started session-56.scope - Session 56 of User core. May 15 15:49:14.518430 sshd[6543]: Connection closed by 139.178.68.195 port 37144 May 15 15:49:14.519536 sshd-session[6541]: pam_unix(sshd:session): session closed for user core May 15 15:49:14.527578 systemd-logind[1513]: Session 56 logged out. Waiting for processes to exit. May 15 15:49:14.528432 systemd[1]: sshd@56-164.92.106.96:22-139.178.68.195:37144.service: Deactivated successfully. May 15 15:49:14.534467 systemd[1]: session-56.scope: Deactivated successfully. May 15 15:49:14.545753 systemd-logind[1513]: Removed session 56. May 15 15:49:15.910383 kubelet[2778]: E0515 15:49:15.909827 2778 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-65cd59455f-72w5b" podUID="86e0d73b-0507-46e9-944b-4fbf6879e642"