May 15 13:06:28.994986 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 13:06:28.995017 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 13:06:28.995028 kernel: BIOS-provided physical RAM map:
May 15 13:06:28.995041 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 15 13:06:28.995048 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 15 13:06:28.995055 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 13:06:28.995064 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 15 13:06:28.995072 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 15 13:06:28.995080 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 13:06:28.995087 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 13:06:28.995095 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 13:06:28.995103 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 13:06:28.995114 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 15 13:06:28.995121 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 13:06:28.995131 kernel: NX (Execute Disable) protection: active
May 15 13:06:28.995139 kernel: APIC: Static calls initialized
May 15 13:06:28.995148 kernel: SMBIOS 2.8 present.
May 15 13:06:28.995159 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 15 13:06:28.995167 kernel: DMI: Memory slots populated: 1/1
May 15 13:06:28.995175 kernel: Hypervisor detected: KVM
May 15 13:06:28.995184 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 13:06:28.995192 kernel: kvm-clock: using sched offset of 8022231937 cycles
May 15 13:06:28.995201 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 13:06:28.995238 kernel: tsc: Detected 1999.996 MHz processor
May 15 13:06:28.995247 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 13:06:28.995256 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 13:06:28.995265 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 15 13:06:28.995277 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 13:06:28.995286 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 13:06:28.995294 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 15 13:06:28.995302 kernel: Using GB pages for direct mapping
May 15 13:06:28.995311 kernel: ACPI: Early table checksum verification disabled
May 15 13:06:28.995319 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 15 13:06:28.995328 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995337 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995346 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995357 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 13:06:28.995366 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995374 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995383 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995396 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 13:06:28.995405 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 15 13:06:28.995417 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 15 13:06:28.995426 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 13:06:28.995435 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 15 13:06:28.995444 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 15 13:06:28.995452 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 15 13:06:28.995461 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 15 13:06:28.995493 kernel: No NUMA configuration found
May 15 13:06:28.995502 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 15 13:06:28.995515 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 15 13:06:28.995524 kernel: Zone ranges:
May 15 13:06:28.995533 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 13:06:28.995542 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 13:06:28.995551 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 15 13:06:28.995560 kernel: Device empty
May 15 13:06:28.995569 kernel: Movable zone start for each node
May 15 13:06:28.995577 kernel: Early memory node ranges
May 15 13:06:28.995587 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 13:06:28.995598 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 15 13:06:28.995607 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 15 13:06:28.995617 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 15 13:06:28.995626 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 13:06:28.995634 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 13:06:28.995643 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 13:06:28.995652 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 13:06:28.995661 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 13:06:28.995670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 13:06:28.995682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 13:06:28.995691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 13:06:28.995700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 13:06:28.995708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 13:06:28.995717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 13:06:28.995726 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 13:06:28.995735 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 13:06:28.995744 kernel: TSC deadline timer available
May 15 13:06:28.995753 kernel: CPU topo: Max. logical packages: 1
May 15 13:06:28.995765 kernel: CPU topo: Max. logical dies: 1
May 15 13:06:28.995773 kernel: CPU topo: Max. dies per package: 1
May 15 13:06:28.995782 kernel: CPU topo: Max. threads per core: 1
May 15 13:06:28.995791 kernel: CPU topo: Num. cores per package: 2
May 15 13:06:28.995800 kernel: CPU topo: Num. threads per package: 2
May 15 13:06:28.995808 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 13:06:28.995817 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 13:06:28.995826 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 13:06:28.995835 kernel: kvm-guest: setup PV sched yield
May 15 13:06:28.995847 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 13:06:28.995856 kernel: Booting paravirtualized kernel on KVM
May 15 13:06:28.995865 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 13:06:28.995874 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 13:06:28.995883 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 13:06:28.995892 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 13:06:28.995900 kernel: pcpu-alloc: [0] 0 1
May 15 13:06:28.995909 kernel: kvm-guest: PV spinlocks enabled
May 15 13:06:28.995918 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 13:06:28.995932 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 13:06:28.995942 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 13:06:28.995953 kernel: random: crng init done
May 15 13:06:28.995962 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 13:06:28.995969 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 13:06:28.995977 kernel: Fallback order for Node 0: 0
May 15 13:06:28.995985 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 15 13:06:28.995992 kernel: Policy zone: Normal
May 15 13:06:28.996002 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 13:06:28.996010 kernel: software IO TLB: area num 2.
May 15 13:06:28.996018 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 13:06:28.996025 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 13:06:28.996033 kernel: ftrace: allocated 157 pages with 5 groups
May 15 13:06:28.996040 kernel: Dynamic Preempt: voluntary
May 15 13:06:28.996048 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 13:06:28.996060 kernel: rcu: RCU event tracing is enabled.
May 15 13:06:28.996068 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 13:06:28.996079 kernel: Trampoline variant of Tasks RCU enabled.
May 15 13:06:28.996086 kernel: Rude variant of Tasks RCU enabled.
May 15 13:06:28.996094 kernel: Tracing variant of Tasks RCU enabled.
May 15 13:06:28.996102 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 13:06:28.996109 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 13:06:28.996117 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 13:06:28.996133 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 13:06:28.996143 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 13:06:28.996151 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 13:06:28.996159 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 13:06:28.996167 kernel: Console: colour VGA+ 80x25
May 15 13:06:28.996175 kernel: printk: legacy console [tty0] enabled
May 15 13:06:28.996186 kernel: printk: legacy console [ttyS0] enabled
May 15 13:06:28.996194 kernel: ACPI: Core revision 20240827
May 15 13:06:28.996202 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 13:06:28.996210 kernel: APIC: Switch to symmetric I/O mode setup
May 15 13:06:28.996218 kernel: x2apic enabled
May 15 13:06:28.996228 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 13:06:28.996236 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 13:06:28.996245 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 13:06:28.996253 kernel: kvm-guest: setup PV IPIs
May 15 13:06:28.996261 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 13:06:28.996269 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
May 15 13:06:28.996277 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
May 15 13:06:28.996285 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 13:06:28.996293 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 13:06:28.996303 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 13:06:28.996312 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 13:06:28.996320 kernel: Spectre V2 : Mitigation: Retpolines
May 15 13:06:28.996328 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 13:06:28.996336 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 13:06:28.996344 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 13:06:28.996352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 13:06:28.996360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 13:06:28.996368 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 13:06:28.996387 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 13:06:28.996395 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 13:06:28.996403 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 13:06:28.996411 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 13:06:28.996419 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 13:06:28.996427 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 15 13:06:28.996435 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 13:06:28.996443 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 15 13:06:28.996458 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 15 13:06:28.996466 kernel: Freeing SMP alternatives memory: 32K
May 15 13:06:28.996498 kernel: pid_max: default: 32768 minimum: 301
May 15 13:06:28.996507 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 13:06:28.996515 kernel: landlock: Up and running.
May 15 13:06:28.996523 kernel: SELinux: Initializing.
May 15 13:06:28.996531 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 13:06:28.996539 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 13:06:28.996547 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 15 13:06:28.996565 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 13:06:28.996573 kernel: ... version: 0
May 15 13:06:28.996581 kernel: ... bit width: 48
May 15 13:06:28.996589 kernel: ... generic registers: 6
May 15 13:06:28.996597 kernel: ... value mask: 0000ffffffffffff
May 15 13:06:28.996605 kernel: ... max period: 00007fffffffffff
May 15 13:06:28.996613 kernel: ... fixed-purpose events: 0
May 15 13:06:28.996621 kernel: ... event mask: 000000000000003f
May 15 13:06:28.996629 kernel: signal: max sigframe size: 3376
May 15 13:06:28.996645 kernel: rcu: Hierarchical SRCU implementation.
May 15 13:06:28.996653 kernel: rcu: Max phase no-delay instances is 400.
May 15 13:06:28.996661 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 13:06:28.996669 kernel: smp: Bringing up secondary CPUs ...
May 15 13:06:28.996677 kernel: smpboot: x86: Booting SMP configuration:
May 15 13:06:28.996685 kernel: .... node #0, CPUs: #1
May 15 13:06:28.996693 kernel: smp: Brought up 1 node, 2 CPUs
May 15 13:06:28.996701 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
May 15 13:06:28.996710 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227296K reserved, 0K cma-reserved)
May 15 13:06:28.996726 kernel: devtmpfs: initialized
May 15 13:06:28.996734 kernel: x86/mm: Memory block size: 128MB
May 15 13:06:28.996742 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 13:06:28.996750 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 13:06:28.996758 kernel: pinctrl core: initialized pinctrl subsystem
May 15 13:06:28.996766 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 13:06:28.996774 kernel: audit: initializing netlink subsys (disabled)
May 15 13:06:28.996782 kernel: audit: type=2000 audit(1747314385.230:1): state=initialized audit_enabled=0 res=1
May 15 13:06:28.996790 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 13:06:28.996805 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 13:06:28.996813 kernel: cpuidle: using governor menu
May 15 13:06:28.996821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 13:06:28.996829 kernel: dca service started, version 1.12.1
May 15 13:06:28.996838 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 15 13:06:28.996846 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 13:06:28.996854 kernel: PCI: Using configuration type 1 for base access
May 15 13:06:28.996862 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 13:06:28.996870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 13:06:28.996885 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 13:06:28.996893 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 13:06:28.996901 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 13:06:28.996909 kernel: ACPI: Added _OSI(Module Device)
May 15 13:06:28.996917 kernel: ACPI: Added _OSI(Processor Device)
May 15 13:06:28.996925 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 13:06:28.996933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 13:06:28.996941 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 13:06:28.996949 kernel: ACPI: Interpreter enabled
May 15 13:06:28.996963 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 13:06:28.996971 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 13:06:28.996979 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 13:06:28.996987 kernel: PCI: Using E820 reservations for host bridge windows
May 15 13:06:28.996995 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 13:06:28.997002 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 13:06:28.997234 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 13:06:28.997370 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 13:06:28.997542 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 13:06:28.997553 kernel: PCI host bridge to bus 0000:00
May 15 13:06:28.997703 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 13:06:28.997823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 13:06:28.997938 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 13:06:28.998052 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 15 13:06:28.998167 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 13:06:28.998298 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 15 13:06:28.998413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 13:06:28.998625 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 15 13:06:28.998792 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 15 13:06:28.998925 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 15 13:06:28.999051 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 15 13:06:28.999193 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 15 13:06:28.999318 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 13:06:28.999490 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 13:06:28.999651 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 15 13:06:28.999780 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 15 13:06:28.999905 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 13:06:29.000056 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 13:06:29.000200 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 15 13:06:29.000326 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 15 13:06:29.000451 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 13:06:29.000975 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 15 13:06:29.001153 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 15 13:06:29.001283 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 13:06:29.002465 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 15 13:06:29.002645 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 15 13:06:29.002774 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 15 13:06:29.002928 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 15 13:06:29.003058 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 15 13:06:29.003069 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 13:06:29.003077 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 13:06:29.003085 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 13:06:29.003105 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 13:06:29.003113 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 13:06:29.003120 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 13:06:29.003127 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 13:06:29.003135 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 13:06:29.003143 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 13:06:29.003150 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 13:06:29.003157 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 13:06:29.003165 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 13:06:29.003179 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 13:06:29.003186 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 13:06:29.003194 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 13:06:29.003201 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 13:06:29.003209 kernel: iommu: Default domain type: Translated
May 15 13:06:29.003216 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 13:06:29.003223 kernel: PCI: Using ACPI for IRQ routing
May 15 13:06:29.003231 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 13:06:29.003238 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 15 13:06:29.003256 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 15 13:06:29.003387 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 13:06:29.003758 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 13:06:29.003890 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 13:06:29.003901 kernel: vgaarb: loaded
May 15 13:06:29.003908 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 13:06:29.003916 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 13:06:29.003923 kernel: clocksource: Switched to clocksource kvm-clock
May 15 13:06:29.003944 kernel: VFS: Disk quotas dquot_6.6.0
May 15 13:06:29.003952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 13:06:29.003959 kernel: pnp: PnP ACPI init
May 15 13:06:29.004131 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 13:06:29.004143 kernel: pnp: PnP ACPI: found 5 devices
May 15 13:06:29.004151 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 13:06:29.004159 kernel: NET: Registered PF_INET protocol family
May 15 13:06:29.004166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 13:06:29.004183 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 13:06:29.004191 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 13:06:29.004198 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 13:06:29.004206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 13:06:29.004213 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 13:06:29.004221 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 13:06:29.004229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 13:06:29.004236 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 13:06:29.004243 kernel: NET: Registered PF_XDP protocol family
May 15 13:06:29.004417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 13:06:29.004565 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 13:06:29.004682 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 13:06:29.004798 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 15 13:06:29.004912 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 13:06:29.005027 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 15 13:06:29.005037 kernel: PCI: CLS 0 bytes, default 64
May 15 13:06:29.005044 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 13:06:29.005064 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 15 13:06:29.005072 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
May 15 13:06:29.005080 kernel: Initialise system trusted keyrings
May 15 13:06:29.005087 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 13:06:29.005095 kernel: Key type asymmetric registered
May 15 13:06:29.005102 kernel: Asymmetric key parser 'x509' registered
May 15 13:06:29.005109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 13:06:29.005116 kernel: io scheduler mq-deadline registered
May 15 13:06:29.005123 kernel: io scheduler kyber registered
May 15 13:06:29.005138 kernel: io scheduler bfq registered
May 15 13:06:29.005145 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 13:06:29.005153 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 13:06:29.005160 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 13:06:29.005168 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 13:06:29.005175 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 13:06:29.005182 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 13:06:29.005190 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 13:06:29.005197 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 13:06:29.005370 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 13:06:29.005382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 15 13:06:29.005533 kernel: rtc_cmos 00:03: registered as rtc0
May 15 13:06:29.005657 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T13:06:28 UTC (1747314388)
May 15 13:06:29.005783 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 13:06:29.005793 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 13:06:29.005800 kernel: NET: Registered PF_INET6 protocol family
May 15 13:06:29.005808 kernel: Segment Routing with IPv6
May 15 13:06:29.005828 kernel: In-situ OAM (IOAM) with IPv6
May 15 13:06:29.005835 kernel: NET: Registered PF_PACKET protocol family
May 15 13:06:29.005843 kernel: Key type dns_resolver registered
May 15 13:06:29.005850 kernel: IPI shorthand broadcast: enabled
May 15 13:06:29.005857 kernel: sched_clock: Marking stable (4344003526, 217440295)->(4644370897, -82927076)
May 15 13:06:29.005864 kernel: registered taskstats version 1
May 15 13:06:29.005872 kernel: Loading compiled-in X.509 certificates
May 15 13:06:29.005879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 13:06:29.005886 kernel: Demotion targets for Node 0: null
May 15 13:06:29.005900 kernel: Key type .fscrypt registered
May 15 13:06:29.005908 kernel: Key type fscrypt-provisioning registered
May 15 13:06:29.005915 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 13:06:29.005923 kernel: ima: Allocated hash algorithm: sha1
May 15 13:06:29.005930 kernel: ima: No architecture policies found
May 15 13:06:29.005938 kernel: clk: Disabling unused clocks
May 15 13:06:29.005946 kernel: Warning: unable to open an initial console.
May 15 13:06:29.005953 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 13:06:29.005961 kernel: Write protecting the kernel read-only data: 24576k
May 15 13:06:29.005975 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 13:06:29.005983 kernel: Run /init as init process
May 15 13:06:29.005991 kernel: with arguments:
May 15 13:06:29.005998 kernel: /init
May 15 13:06:29.006005 kernel: with environment:
May 15 13:06:29.006013 kernel: HOME=/
May 15 13:06:29.006072 kernel: TERM=linux
May 15 13:06:29.006087 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 13:06:29.006096 systemd[1]: Successfully made /usr/ read-only.
May 15 13:06:29.006114 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 13:06:29.006123 systemd[1]: Detected virtualization kvm.
May 15 13:06:29.006131 systemd[1]: Detected architecture x86-64.
May 15 13:06:29.006139 systemd[1]: Running in initrd.
May 15 13:06:29.006147 systemd[1]: No hostname configured, using default hostname.
May 15 13:06:29.006155 systemd[1]: Hostname set to .
May 15 13:06:29.006163 systemd[1]: Initializing machine ID from random generator.
May 15 13:06:29.006178 systemd[1]: Queued start job for default target initrd.target.
May 15 13:06:29.006193 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 13:06:29.006201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 13:06:29.006210 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 13:06:29.006218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 13:06:29.006227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 13:06:29.006236 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 13:06:29.006251 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 13:06:29.006260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 13:06:29.006268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 13:06:29.006276 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 13:06:29.006284 systemd[1]: Reached target paths.target - Path Units.
May 15 13:06:29.006292 systemd[1]: Reached target slices.target - Slice Units.
May 15 13:06:29.006300 systemd[1]: Reached target swap.target - Swaps.
May 15 13:06:29.006309 systemd[1]: Reached target timers.target - Timer Units.
May 15 13:06:29.006323 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 13:06:29.006331 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 13:06:29.006340 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 13:06:29.006348 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 13:06:29.006356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 13:06:29.006364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 13:06:29.006372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 13:06:29.006387 systemd[1]: Reached target sockets.target - Socket Units.
May 15 13:06:29.006396 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 13:06:29.006404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 13:06:29.006413 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 13:06:29.006421 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 13:06:29.006430 systemd[1]: Starting systemd-fsck-usr.service...
May 15 13:06:29.006438 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 13:06:29.006453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 13:06:29.006461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 13:06:29.006549 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 13:06:29.006559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 13:06:29.006577 systemd[1]: Finished systemd-fsck-usr.service.
May 15 13:06:29.006586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 13:06:29.006621 systemd-journald[206]: Collecting audit messages is disabled.
May 15 13:06:29.006641 systemd-journald[206]: Journal started
May 15 13:06:29.006671 systemd-journald[206]: Runtime Journal (/run/log/journal/b95c7b1b55554475aaa75619f170038f) is 8M, max 78.5M, 70.5M free.
May 15 13:06:29.008885 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 13:06:28.981351 systemd-modules-load[207]: Inserted module 'overlay'
May 15 13:06:29.018905 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 13:06:29.068093 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 13:06:29.122598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 13:06:29.122650 kernel: Bridge firewalling registered
May 15 13:06:29.071711 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 15 13:06:29.076731 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 13:06:29.123848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 13:06:29.124720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 13:06:29.126156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 13:06:29.129841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 13:06:29.133614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 13:06:29.142611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 13:06:29.155754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 13:06:29.160566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 13:06:29.162896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 13:06:29.164670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 13:06:29.167618 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 13:06:29.188441 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 13:06:29.206747 systemd-resolved[242]: Positive Trust Anchors:
May 15 13:06:29.206760 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 13:06:29.206789 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 13:06:29.213453 systemd-resolved[242]: Defaulting to hostname 'linux'.
May 15 13:06:29.215984 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 13:06:29.224540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 13:06:29.353525 kernel: SCSI subsystem initialized
May 15 13:06:29.363522 kernel: Loading iSCSI transport class v2.0-870.
May 15 13:06:29.374506 kernel: iscsi: registered transport (tcp)
May 15 13:06:29.396857 kernel: iscsi: registered transport (qla4xxx)
May 15 13:06:29.396966 kernel: QLogic iSCSI HBA Driver
May 15 13:06:29.419695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 13:06:29.437594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 13:06:29.440009 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 13:06:29.494259 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 13:06:29.497417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 13:06:29.550540 kernel: raid6: avx2x4 gen() 34637 MB/s
May 15 13:06:29.568505 kernel: raid6: avx2x2 gen() 32613 MB/s
May 15 13:06:29.586884 kernel: raid6: avx2x1 gen() 22918 MB/s
May 15 13:06:29.586923 kernel: raid6: using algorithm avx2x4 gen() 34637 MB/s
May 15 13:06:29.605919 kernel: raid6: .... xor() 4863 MB/s, rmw enabled
May 15 13:06:29.605964 kernel: raid6: using avx2x2 recovery algorithm
May 15 13:06:29.625507 kernel: xor: automatically using best checksumming function avx
May 15 13:06:29.860510 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 13:06:29.869350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 13:06:29.872534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 13:06:29.901032 systemd-udevd[456]: Using default interface naming scheme 'v255'.
May 15 13:06:29.907631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 13:06:29.910231 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 13:06:29.938536 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
May 15 13:06:29.971796 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 13:06:29.974106 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 13:06:30.057983 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 13:06:30.061637 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 13:06:30.154506 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 13:06:30.166598 kernel: cryptd: max_cpu_qlen set to 1000
May 15 13:06:30.166641 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 15 13:06:30.197318 kernel: scsi host0: Virtio SCSI HBA
May 15 13:06:30.197589 kernel: AES CTR mode by8 optimization enabled
May 15 13:06:30.200494 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 15 13:06:30.297788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 13:06:30.298954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 13:06:30.301524 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 13:06:30.306808 kernel: libata version 3.00 loaded.
May 15 13:06:30.305716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 13:06:30.306624 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 13:06:30.350565 kernel: ahci 0000:00:1f.2: version 3.0
May 15 13:06:30.396279 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 15 13:06:30.399876 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 13:06:30.399893 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 15 13:06:30.400064 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 15 13:06:30.400241 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 15 13:06:30.400410 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 15 13:06:30.400584 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 15 13:06:30.400744 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 13:06:30.400889 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 15 13:06:30.401042 kernel: scsi host1: ahci
May 15 13:06:30.401219 kernel: scsi host2: ahci
May 15 13:06:30.401388 kernel: scsi host3: ahci
May 15 13:06:30.402602 kernel: scsi host4: ahci
May 15 13:06:30.402777 kernel: scsi host5: ahci
May 15 13:06:30.402934 kernel: scsi host6: ahci
May 15 13:06:30.403097 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
May 15 13:06:30.403109 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
May 15 13:06:30.403130 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
May 15 13:06:30.403140 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
May 15 13:06:30.403150 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
May 15 13:06:30.403159 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
May 15 13:06:30.403169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 13:06:30.403178 kernel: GPT:9289727 != 167739391
May 15 13:06:30.403187 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 13:06:30.403197 kernel: GPT:9289727 != 167739391
May 15 13:06:30.403206 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 13:06:30.403223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 13:06:30.403232 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 15 13:06:30.480200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 13:06:30.709060 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.709134 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.709146 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.709157 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.709167 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.709212 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 13:06:30.766057 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 15 13:06:30.782994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 15 13:06:30.796743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 15 13:06:30.797346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 15 13:06:30.807386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 13:06:30.808211 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 13:06:30.811040 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 13:06:30.812403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 13:06:30.813011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 13:06:30.815073 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 13:06:30.817586 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 13:06:30.827369 disk-uuid[632]: Primary Header is updated.
May 15 13:06:30.827369 disk-uuid[632]: Secondary Entries is updated.
May 15 13:06:30.827369 disk-uuid[632]: Secondary Header is updated.
May 15 13:06:30.833868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 13:06:30.863629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 13:06:30.880495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 13:06:31.879541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 13:06:31.880721 disk-uuid[634]: The operation has completed successfully.
May 15 13:06:31.932950 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 13:06:31.933122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 13:06:31.957441 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 13:06:31.972605 sh[654]: Success
May 15 13:06:32.005513 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 13:06:32.005578 kernel: device-mapper: uevent: version 1.0.3
May 15 13:06:32.008209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 13:06:32.021883 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 15 13:06:32.076205 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 13:06:32.081564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 13:06:32.090136 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 13:06:32.104624 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 13:06:32.104661 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (667)
May 15 13:06:32.108733 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 13:06:32.108759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 13:06:32.111497 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 13:06:32.120001 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 13:06:32.121127 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 13:06:32.122434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 13:06:32.123362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 13:06:32.125914 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 13:06:32.159516 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (700)
May 15 13:06:32.173866 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 13:06:32.173934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 13:06:32.173948 kernel: BTRFS info (device sda6): using free-space-tree
May 15 13:06:32.188816 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 13:06:32.189527 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 13:06:32.192709 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 13:06:32.296960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 13:06:32.604419 ignition[769]: Ignition 2.21.0
May 15 13:06:32.605269 ignition[769]: Stage: fetch-offline
May 15 13:06:32.613182 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 15 13:06:32.613203 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:32.613411 ignition[769]: parsed url from cmdline: ""
May 15 13:06:32.613417 ignition[769]: no config URL provided
May 15 13:06:32.613425 ignition[769]: reading system config file "/usr/lib/ignition/user.ign"
May 15 13:06:32.613446 ignition[769]: no config at "/usr/lib/ignition/user.ign"
May 15 13:06:32.613454 ignition[769]: failed to fetch config: resource requires networking
May 15 13:06:32.614985 ignition[769]: Ignition finished successfully
May 15 13:06:32.666845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 13:06:32.667841 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 13:06:32.715488 systemd-networkd[842]: lo: Link UP
May 15 13:06:32.715529 systemd-networkd[842]: lo: Gained carrier
May 15 13:06:32.718056 systemd-networkd[842]: Enumeration completed
May 15 13:06:32.718545 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 13:06:32.718689 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 13:06:32.718693 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 13:06:32.720604 systemd[1]: Reached target network.target - Network.
May 15 13:06:32.728963 systemd-networkd[842]: eth0: Link UP
May 15 13:06:32.728968 systemd-networkd[842]: eth0: Gained carrier
May 15 13:06:32.728978 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 13:06:32.730593 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 13:06:32.783451 ignition[846]: Ignition 2.21.0
May 15 13:06:32.783494 ignition[846]: Stage: fetch
May 15 13:06:32.783686 ignition[846]: no configs at "/usr/lib/ignition/base.d"
May 15 13:06:32.783698 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:32.783801 ignition[846]: parsed url from cmdline: ""
May 15 13:06:32.783805 ignition[846]: no config URL provided
May 15 13:06:32.783811 ignition[846]: reading system config file "/usr/lib/ignition/user.ign"
May 15 13:06:32.783820 ignition[846]: no config at "/usr/lib/ignition/user.ign"
May 15 13:06:32.783861 ignition[846]: PUT http://169.254.169.254/v1/token: attempt #1
May 15 13:06:32.784941 ignition[846]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 13:06:32.985122 ignition[846]: PUT http://169.254.169.254/v1/token: attempt #2
May 15 13:06:32.985347 ignition[846]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 13:06:33.217569 systemd-networkd[842]: eth0: DHCPv4 address 172.236.109.179/24, gateway 172.236.109.1 acquired from 23.215.118.230
May 15 13:06:33.385557 ignition[846]: PUT http://169.254.169.254/v1/token: attempt #3
May 15 13:06:33.477364 ignition[846]: PUT result: OK
May 15 13:06:33.477433 ignition[846]: GET http://169.254.169.254/v1/user-data: attempt #1
May 15 13:06:33.588554 ignition[846]: GET result: OK
May 15 13:06:33.588658 ignition[846]: parsing config with SHA512: 3d48973c0bd589913d3c006714adf4b4311b1ef53d574ff12dc41a9f2a9394b554e57ff0a16a87c9ee6eaec8bced16045e333552916d342de39c8d140a3a0c0a
May 15 13:06:33.592126 unknown[846]: fetched base config from "system"
May 15 13:06:33.592137 unknown[846]: fetched base config from "system"
May 15 13:06:33.592372 ignition[846]: fetch: fetch complete
May 15 13:06:33.592143 unknown[846]: fetched user config from "akamai"
May 15 13:06:33.592377 ignition[846]: fetch: fetch passed
May 15 13:06:33.592424 ignition[846]: Ignition finished successfully
May 15 13:06:33.595323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 13:06:33.598591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 13:06:33.660901 ignition[854]: Ignition 2.21.0
May 15 13:06:33.660918 ignition[854]: Stage: kargs
May 15 13:06:33.661219 ignition[854]: no configs at "/usr/lib/ignition/base.d"
May 15 13:06:33.661232 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:33.664110 ignition[854]: kargs: kargs passed
May 15 13:06:33.664167 ignition[854]: Ignition finished successfully
May 15 13:06:33.665972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 13:06:33.668578 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 13:06:33.726828 ignition[861]: Ignition 2.21.0
May 15 13:06:33.726843 ignition[861]: Stage: disks
May 15 13:06:33.726983 ignition[861]: no configs at "/usr/lib/ignition/base.d"
May 15 13:06:33.726995 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:33.728164 ignition[861]: disks: disks passed
May 15 13:06:33.728236 ignition[861]: Ignition finished successfully
May 15 13:06:33.731046 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 13:06:33.732461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 13:06:33.733773 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 13:06:33.734380 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 13:06:33.735710 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 13:06:33.736855 systemd[1]: Reached target basic.target - Basic System.
May 15 13:06:33.739131 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 13:06:33.765228 systemd-fsck[870]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 13:06:33.768017 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 13:06:33.771244 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 13:06:33.916514 kernel: EXT4-fs (sda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 13:06:33.917214 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 13:06:33.918262 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 13:06:33.920169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 13:06:33.922550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 13:06:33.924842 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 13:06:33.925882 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 13:06:33.925913 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 13:06:33.933776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 13:06:33.935190 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 13:06:33.943506 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (878)
May 15 13:06:33.948719 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 13:06:33.948756 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 13:06:33.948769 kernel: BTRFS info (device sda6): using free-space-tree
May 15 13:06:33.959933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 13:06:34.024524 initrd-setup-root[902]: cut: /sysroot/etc/passwd: No such file or directory
May 15 13:06:34.030143 initrd-setup-root[909]: cut: /sysroot/etc/group: No such file or directory
May 15 13:06:34.035101 initrd-setup-root[916]: cut: /sysroot/etc/shadow: No such file or directory
May 15 13:06:34.040080 initrd-setup-root[923]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 13:06:34.276434 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 13:06:34.278695 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 13:06:34.279880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 13:06:34.287778 systemd-networkd[842]: eth0: Gained IPv6LL
May 15 13:06:34.298127 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 13:06:34.300520 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 13:06:34.316646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 13:06:34.325852 ignition[992]: INFO : Ignition 2.21.0
May 15 13:06:34.325852 ignition[992]: INFO : Stage: mount
May 15 13:06:34.327139 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 13:06:34.327139 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:34.328601 ignition[992]: INFO : mount: mount passed
May 15 13:06:34.328601 ignition[992]: INFO : Ignition finished successfully
May 15 13:06:34.329928 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 13:06:34.332011 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 13:06:34.919372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 13:06:34.942656 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1002)
May 15 13:06:34.950503 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 13:06:34.950596 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 13:06:34.950613 kernel: BTRFS info (device sda6): using free-space-tree
May 15 13:06:34.959631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 13:06:34.988541 ignition[1019]: INFO : Ignition 2.21.0
May 15 13:06:34.988541 ignition[1019]: INFO : Stage: files
May 15 13:06:34.990246 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 13:06:34.990246 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:34.990246 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping
May 15 13:06:34.992758 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 13:06:34.992758 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 13:06:34.994483 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 13:06:34.995355 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 13:06:34.996200 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 13:06:34.995647 unknown[1019]: wrote ssh authorized keys file for user: core
May 15 13:06:34.997947 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 13:06:34.997947 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 13:06:35.291241 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 13:06:35.491677 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 13:06:35.492953 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 13:06:35.500667 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 15 13:06:35.744430 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 15 13:06:36.582137 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 13:06:36.582137 ignition[1019]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 15 13:06:36.585042 ignition[1019]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 15 13:06:36.586508 ignition[1019]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 15 13:06:36.596863 ignition[1019]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 15 13:06:36.596863 ignition[1019]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 13:06:36.596863 ignition[1019]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 13:06:36.596863 ignition[1019]: INFO : files: files passed
May 15 13:06:36.596863 ignition[1019]: INFO : Ignition finished successfully
May 15 13:06:36.589977 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 13:06:36.593675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 13:06:36.598673 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 13:06:36.610643 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 13:06:36.611618 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 13:06:36.619494 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 13:06:36.619494 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 13:06:36.622201 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 13:06:36.623750 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 13:06:36.625577 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 13:06:36.627928 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 13:06:36.680891 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 13:06:36.681030 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 13:06:36.682383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 13:06:36.683296 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 13:06:36.684542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 13:06:36.685494 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 13:06:36.724049 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 13:06:36.726533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 13:06:36.747923 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 13:06:36.748672 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 13:06:36.750257 systemd[1]: Stopped target timers.target - Timer Units.
May 15 13:06:36.751615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 13:06:36.751774 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 13:06:36.753106 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 13:06:36.753979 systemd[1]: Stopped target basic.target - Basic System.
May 15 13:06:36.755153 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 13:06:36.756270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 13:06:36.757281 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 13:06:36.758416 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 15 13:06:36.759722 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 13:06:36.760901 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 13:06:36.762136 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 13:06:36.763400 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 13:06:36.764559 systemd[1]: Stopped target swap.target - Swaps.
May 15 13:06:36.765716 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 13:06:36.765906 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 13:06:36.767277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 13:06:36.768113 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 13:06:36.769128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 13:06:36.769460 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 13:06:36.770336 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 13:06:36.770500 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 13:06:36.771980 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 13:06:36.772103 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 13:06:36.772868 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 13:06:36.773010 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 13:06:36.775574 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 13:06:36.777659 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 13:06:36.778177 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 13:06:36.779621 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 13:06:36.781994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 13:06:36.782168 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 13:06:36.790053 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 13:06:36.790169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 13:06:36.818492 ignition[1073]: INFO : Ignition 2.21.0
May 15 13:06:36.818492 ignition[1073]: INFO : Stage: umount
May 15 13:06:36.818492 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 13:06:36.818492 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 13:06:36.818492 ignition[1073]: INFO : umount: umount passed
May 15 13:06:36.818492 ignition[1073]: INFO : Ignition finished successfully
May 15 13:06:36.822845 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 13:06:36.823861 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 13:06:36.824733 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 13:06:36.848351 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 13:06:36.848510 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 13:06:36.849891 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 13:06:36.849955 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 13:06:36.850733 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 13:06:36.850795 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 13:06:36.851745 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 13:06:36.851796 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 13:06:36.852735 systemd[1]: Stopped target network.target - Network.
May 15 13:06:36.853753 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 13:06:36.853809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 13:06:36.854851 systemd[1]: Stopped target paths.target - Path Units.
May 15 13:06:36.855829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 13:06:36.859555 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 13:06:36.860152 systemd[1]: Stopped target slices.target - Slice Units.
May 15 13:06:36.861379 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 13:06:36.862415 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 13:06:36.862486 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 13:06:36.863415 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 13:06:36.863457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 13:06:36.864383 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 13:06:36.864453 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 13:06:36.865388 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 13:06:36.865439 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 13:06:36.866410 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 13:06:36.866464 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 13:06:36.867572 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 13:06:36.868634 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 13:06:36.872764 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 13:06:36.872908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 13:06:36.876899 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 13:06:36.877206 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 13:06:36.877363 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 13:06:36.879202 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 13:06:36.880635 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 15 13:06:36.881836 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 13:06:36.881881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 13:06:36.883885 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 13:06:36.885804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 13:06:36.885869 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 13:06:36.888606 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 13:06:36.888669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 13:06:36.891136 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 13:06:36.891196 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 13:06:36.892284 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 13:06:36.892339 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 13:06:36.894140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 13:06:36.900584 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 13:06:36.900656 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 13:06:36.912131 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 13:06:36.913277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 13:06:36.915049 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 13:06:36.915185 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 13:06:36.917298 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 13:06:36.917400 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 13:06:36.918037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 13:06:36.918079 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 13:06:36.919597 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 13:06:36.919654 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 13:06:36.921367 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 13:06:36.921418 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 13:06:36.922590 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 13:06:36.922646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 13:06:36.924720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 13:06:36.928927 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 13:06:36.928994 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 13:06:36.930350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 13:06:36.930414 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 13:06:36.934053 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 13:06:36.934105 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 13:06:36.935572 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 13:06:36.935631 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 13:06:36.939749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 13:06:36.939802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 13:06:36.941763 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 15 13:06:36.941824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 15 13:06:36.941878 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 13:06:36.941930 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 13:06:36.943831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 13:06:36.943955 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 13:06:36.945769 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 13:06:36.947715 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 13:06:36.968531 systemd[1]: Switching root.
May 15 13:06:37.001683 systemd-journald[206]: Journal stopped
May 15 13:06:38.299328 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
May 15 13:06:38.299364 kernel: SELinux: policy capability network_peer_controls=1
May 15 13:06:38.299377 kernel: SELinux: policy capability open_perms=1
May 15 13:06:38.299391 kernel: SELinux: policy capability extended_socket_class=1
May 15 13:06:38.299400 kernel: SELinux: policy capability always_check_network=0
May 15 13:06:38.299410 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 13:06:38.299420 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 13:06:38.299430 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 13:06:38.299441 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 13:06:38.299450 kernel: SELinux: policy capability userspace_initial_context=0
May 15 13:06:38.299463 kernel: audit: type=1403 audit(1747314397.157:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 13:06:38.301949 systemd[1]: Successfully loaded SELinux policy in 78.442ms.
May 15 13:06:38.301967 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.531ms.
May 15 13:06:38.301980 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 13:06:38.301991 systemd[1]: Detected virtualization kvm.
May 15 13:06:38.302007 systemd[1]: Detected architecture x86-64.
May 15 13:06:38.302018 systemd[1]: Detected first boot.
May 15 13:06:38.302029 systemd[1]: Initializing machine ID from random generator.
May 15 13:06:38.302039 zram_generator::config[1117]: No configuration found.
May 15 13:06:38.302050 kernel: Guest personality initialized and is inactive
May 15 13:06:38.302060 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 13:06:38.302070 kernel: Initialized host personality
May 15 13:06:38.302083 kernel: NET: Registered PF_VSOCK protocol family
May 15 13:06:38.302093 systemd[1]: Populated /etc with preset unit settings.
May 15 13:06:38.302105 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 13:06:38.302116 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 13:06:38.302126 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 13:06:38.302137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 13:06:38.302147 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 13:06:38.302161 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 13:06:38.302172 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 13:06:38.302182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 13:06:38.302193 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 13:06:38.302203 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 13:06:38.302214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 13:06:38.302225 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 13:06:38.302238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 13:06:38.302248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 13:06:38.302259 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 13:06:38.302269 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 13:06:38.302283 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 13:06:38.302295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 13:06:38.302306 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 13:06:38.302316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 13:06:38.302329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 13:06:38.302340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 13:06:38.302351 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 13:06:38.302362 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 13:06:38.302373 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 13:06:38.302383 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 13:06:38.302394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 13:06:38.302405 systemd[1]: Reached target slices.target - Slice Units.
May 15 13:06:38.302418 systemd[1]: Reached target swap.target - Swaps.
May 15 13:06:38.302429 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 13:06:38.302439 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 13:06:38.302450 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 13:06:38.302461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 13:06:38.302601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 13:06:38.302614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 13:06:38.302625 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 13:06:38.302636 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 13:06:38.302647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 13:06:38.302658 systemd[1]: Mounting media.mount - External Media Directory...
May 15 13:06:38.302669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 13:06:38.302679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 13:06:38.302694 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 13:06:38.302705 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 13:06:38.302717 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 13:06:38.302728 systemd[1]: Reached target machines.target - Containers.
May 15 13:06:38.302739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 13:06:38.302750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 13:06:38.302762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 13:06:38.302772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 13:06:38.302786 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 13:06:38.302797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 13:06:38.302808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 13:06:38.302819 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse.
May 15 13:06:38.302830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 13:06:38.302841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 13:06:38.302852 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 13:06:38.302863 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 13:06:38.302874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 13:06:38.302888 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 13:06:38.302899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 13:06:38.302910 kernel: loop: module loaded
May 15 13:06:38.302920 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 13:06:38.302932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 13:06:38.302943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 13:06:38.302955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 13:06:38.302966 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 13:06:38.302979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 13:06:38.302990 kernel: fuse: init (API version 7.41)
May 15 13:06:38.303001 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 13:06:38.303012 systemd[1]: Stopped verity-setup.service.
May 15 13:06:38.303023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 13:06:38.303034 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 13:06:38.303044 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 13:06:38.303055 systemd[1]: Mounted media.mount - External Media Directory.
May 15 13:06:38.303068 kernel: ACPI: bus type drm_connector registered
May 15 13:06:38.303079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 13:06:38.303090 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 13:06:38.303101 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 13:06:38.303111 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 13:06:38.303122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 13:06:38.303132 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 13:06:38.303143 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 13:06:38.303153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 13:06:38.303166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 13:06:38.303177 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 13:06:38.303187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 13:06:38.303198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 13:06:38.303208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 13:06:38.303219 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 13:06:38.303229 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 13:06:38.303240 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 13:06:38.303250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 13:06:38.303263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 13:06:38.303273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 13:06:38.303325 systemd-journald[1198]: Collecting audit messages is disabled.
May 15 13:06:38.303352 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 13:06:38.303364 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 13:06:38.303377 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 13:06:38.303389 systemd-journald[1198]: Journal started
May 15 13:06:38.303409 systemd-journald[1198]: Runtime Journal (/run/log/journal/fa5957fdf9de479f893832549c4700c7) is 8M, max 78.5M, 70.5M free.
May 15 13:06:37.828094 systemd[1]: Queued start job for default target multi-user.target.
May 15 13:06:37.841699 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 15 13:06:37.842220 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 13:06:38.310518 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 13:06:38.316491 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 13:06:38.320645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 13:06:38.327083 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 13:06:38.327117 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 13:06:38.333530 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 13:06:38.336499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 13:06:38.341512 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 13:06:38.346505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 13:06:38.352550 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 13:06:38.360508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 13:06:38.365500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 13:06:38.571897 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 13:06:38.572037 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 13:06:38.572062 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 13:06:38.547858 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 13:06:38.549651 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 13:06:38.573514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 13:06:38.593686 kernel: loop0: detected capacity change from 0 to 205544
May 15 13:06:38.588063 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 13:06:38.615432 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 13:06:38.629053 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 13:06:38.632083 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 13:06:38.633192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 13:06:38.653753 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 13:06:38.681496 kernel: loop1: detected capacity change from 0 to 146240
May 15 13:06:38.687663 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 13:06:38.689181 systemd-journald[1198]: Time spent on flushing to /var/log/journal/fa5957fdf9de479f893832549c4700c7 is 43.310ms for 1010 entries.
May 15 13:06:38.689181 systemd-journald[1198]: System Journal (/var/log/journal/fa5957fdf9de479f893832549c4700c7) is 8M, max 195.6M, 187.6M free.
May 15 13:06:38.737144 systemd-journald[1198]: Received client request to flush runtime journal.
May 15 13:06:38.707758 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
May 15 13:06:38.707771 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
May 15 13:06:38.738024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 13:06:38.740580 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 13:06:38.744846 kernel: loop2: detected capacity change from 0 to 8
May 15 13:06:38.746621 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 13:06:38.769545 kernel: loop3: detected capacity change from 0 to 113872
May 15 13:06:38.814675 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 13:06:38.821973 kernel: loop4: detected capacity change from 0 to 205544
May 15 13:06:38.822601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 13:06:38.844224 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 13:06:38.872305 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 15 13:06:38.872576 kernel: loop5: detected capacity change from 0 to 146240
May 15 13:06:38.872623 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 15 13:06:38.878302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 13:06:38.931531 kernel: loop6: detected capacity change from 0 to 8 May 15 13:06:38.935531 kernel: loop7: detected capacity change from 0 to 113872 May 15 13:06:38.947851 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 15 13:06:38.950681 (sd-merge)[1265]: Merged extensions into '/usr'. May 15 13:06:38.965996 systemd[1]: Reload requested from client PID 1223 ('systemd-sysext') (unit systemd-sysext.service)... May 15 13:06:38.966010 systemd[1]: Reloading... May 15 13:06:39.129636 zram_generator::config[1290]: No configuration found. May 15 13:06:39.281727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 13:06:39.460897 systemd[1]: Reloading finished in 494 ms. May 15 13:06:39.500752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 13:06:39.512970 systemd[1]: Starting ensure-sysext.service... May 15 13:06:39.516607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 13:06:39.635769 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... May 15 13:06:39.635792 systemd[1]: Reloading... May 15 13:06:39.966878 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 13:06:39.968016 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 13:06:39.968541 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 15 13:06:39.968900 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 13:06:39.971085 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 13:06:39.973657 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 15 13:06:39.974588 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. May 15 13:06:39.990910 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 15 13:06:39.991015 systemd-tmpfiles[1337]: Skipping /boot May 15 13:06:40.016655 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 13:06:40.039795 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. May 15 13:06:40.041507 systemd-tmpfiles[1337]: Skipping /boot May 15 13:06:40.067512 zram_generator::config[1365]: No configuration found. May 15 13:06:40.182305 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 13:06:40.258690 systemd[1]: Reloading finished in 622 ms. May 15 13:06:40.282992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 13:06:40.284147 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 13:06:40.302222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 13:06:40.312201 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 13:06:40.316546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 13:06:40.322334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
May 15 13:06:40.328195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 13:06:40.333726 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 13:06:40.339541 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 13:06:40.342746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 13:06:40.342908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 13:06:40.345735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 13:06:40.355802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 13:06:40.400671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 13:06:40.401383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 13:06:40.401536 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 13:06:40.401635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 13:06:40.403904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 13:06:40.411927 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 13:06:40.417151 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 13:06:40.429424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 15 13:06:40.429673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 13:06:40.429898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 13:06:40.430028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 13:06:40.430147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 13:06:40.435737 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 13:06:40.445068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 13:06:40.445354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 13:06:40.468042 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 13:06:40.469864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 13:06:40.470416 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 13:06:40.470583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 13:06:40.472508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 15 13:06:40.472793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 13:06:40.475321 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 13:06:40.479434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 13:06:40.480811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 13:06:40.489397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 13:06:40.491552 systemd[1]: Finished ensure-sysext.service. May 15 13:06:40.492142 augenrules[1448]: No rules May 15 13:06:40.505653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 13:06:40.507696 systemd[1]: audit-rules.service: Deactivated successfully. May 15 13:06:40.507955 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 13:06:40.509014 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 13:06:40.510365 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 13:06:40.512906 systemd-udevd[1418]: Using default interface naming scheme 'v255'. May 15 13:06:40.517515 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 13:06:40.517759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 13:06:40.519142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 13:06:40.519547 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 13:06:40.519763 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 13:06:40.523185 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
May 15 13:06:40.565493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 13:06:40.571697 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 13:06:40.688655 systemd-resolved[1415]: Positive Trust Anchors: May 15 13:06:40.688686 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 13:06:40.688715 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 13:06:40.694544 systemd-resolved[1415]: Defaulting to hostname 'linux'. May 15 13:06:40.698231 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 13:06:40.698975 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 13:06:40.753784 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 13:06:40.765363 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 13:06:40.766044 systemd[1]: Reached target sysinit.target - System Initialization. May 15 13:06:40.766668 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 13:06:40.767257 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 13:06:40.767851 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
May 15 13:06:40.768397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 13:06:40.768997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 13:06:40.769025 systemd[1]: Reached target paths.target - Path Units. May 15 13:06:40.769535 systemd[1]: Reached target time-set.target - System Time Set. May 15 13:06:40.770221 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 13:06:40.770891 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 13:06:40.772521 systemd[1]: Reached target timers.target - Timer Units. May 15 13:06:40.774955 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 13:06:40.778127 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 13:06:40.783211 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 13:06:40.785103 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 13:06:40.786561 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 13:06:40.796075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 13:06:40.798007 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 13:06:40.802454 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 13:06:40.805132 systemd[1]: Reached target sockets.target - Socket Units. May 15 13:06:40.805707 systemd[1]: Reached target basic.target - Basic System. May 15 13:06:40.807568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 13:06:40.807601 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
May 15 13:06:40.811970 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 13:06:40.815635 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 13:06:40.820637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 13:06:40.825297 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 13:06:40.827689 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 13:06:40.828252 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 13:06:40.839886 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 13:06:40.854687 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 13:06:40.865579 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 13:06:40.871419 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 13:06:40.915714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 13:06:40.969706 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 13:06:40.972378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 13:06:40.972929 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 15 13:06:40.978517 jq[1504]: false May 15 13:06:40.978881 systemd-networkd[1468]: lo: Link UP May 15 13:06:40.978890 systemd-networkd[1468]: lo: Gained carrier May 15 13:06:40.979739 coreos-metadata[1501]: May 15 13:06:40.979 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 13:06:40.983507 systemd-networkd[1468]: Enumeration completed May 15 13:06:40.986739 systemd[1]: Starting update-engine.service - Update Engine... May 15 13:06:40.995506 extend-filesystems[1505]: Found loop4 May 15 13:06:40.995506 extend-filesystems[1505]: Found loop5 May 15 13:06:40.995506 extend-filesystems[1505]: Found loop6 May 15 13:06:40.995506 extend-filesystems[1505]: Found loop7 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda May 15 13:06:40.995506 extend-filesystems[1505]: Found sda1 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda2 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda3 May 15 13:06:40.995506 extend-filesystems[1505]: Found usr May 15 13:06:40.995506 extend-filesystems[1505]: Found sda4 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda6 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda7 May 15 13:06:40.995506 extend-filesystems[1505]: Found sda9 May 15 13:06:41.007535 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 13:06:41.009105 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 13:06:41.011006 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 13:06:41.012922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 13:06:41.013170 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 13:06:41.013531 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 13:06:41.013793 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 15 13:06:41.015909 systemd[1]: motdgen.service: Deactivated successfully. May 15 13:06:41.016153 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 13:06:41.018454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 13:06:41.019895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 13:06:41.037573 systemd[1]: Reached target network.target - Network. May 15 13:06:41.042893 systemd[1]: Starting containerd.service - containerd container runtime... May 15 13:06:41.051768 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 13:06:41.060044 jq[1526]: true May 15 13:06:41.061419 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 13:06:41.101749 dbus-daemon[1502]: [system] SELinux support is enabled May 15 13:06:41.103487 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 13:06:41.105826 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 13:06:41.108777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 13:06:41.108805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 13:06:41.109407 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 13:06:41.109422 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 15 13:06:41.144437 tar[1528]: linux-amd64/helm May 15 13:06:41.148494 jq[1538]: true May 15 13:06:41.147632 oslogin_cache_refresh[1506]: Refreshing passwd entry cache May 15 13:06:41.148928 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Refreshing passwd entry cache May 15 13:06:41.150978 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Failure getting users, quitting May 15 13:06:41.151045 oslogin_cache_refresh[1506]: Failure getting users, quitting May 15 13:06:41.151262 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 13:06:41.151296 oslogin_cache_refresh[1506]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 13:06:41.151391 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Refreshing group entry cache May 15 13:06:41.151424 oslogin_cache_refresh[1506]: Refreshing group entry cache May 15 13:06:41.152043 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Failure getting groups, quitting May 15 13:06:41.152101 oslogin_cache_refresh[1506]: Failure getting groups, quitting May 15 13:06:41.152162 google_oslogin_nss_cache[1506]: oslogin_cache_refresh[1506]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 13:06:41.152191 oslogin_cache_refresh[1506]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 13:06:41.155902 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 13:06:41.156204 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 13:06:41.182525 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 15 13:06:41.204905 update_engine[1518]: I20250515 13:06:41.204792 1518 main.cc:92] Flatcar Update Engine starting May 15 13:06:41.219491 kernel: ACPI: button: Power Button [PWRF] May 15 13:06:41.219899 systemd[1]: Started update-engine.service - Update Engine. 
May 15 13:06:41.222878 bash[1565]: Updated "/home/core/.ssh/authorized_keys" May 15 13:06:41.222996 update_engine[1518]: I20250515 13:06:41.219884 1518 update_check_scheduler.cc:74] Next update check in 3m59s May 15 13:06:41.223266 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 13:06:41.224422 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 13:06:41.230007 systemd[1]: Starting sshkeys.service... May 15 13:06:41.246834 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 13:06:41.259025 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 13:06:41.261958 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 13:06:41.276823 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 13:06:41.277289 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 13:06:41.549461 systemd-logind[1516]: New seat seat0. May 15 13:06:41.551717 systemd[1]: Started systemd-logind.service - User Login Management. May 15 13:06:41.613240 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 13:06:41.613251 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 13:06:41.614376 systemd-networkd[1468]: eth0: Link UP May 15 13:06:41.614809 systemd-networkd[1468]: eth0: Gained carrier May 15 13:06:41.614822 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 13:06:41.774159 kernel: mousedev: PS/2 mouse device common for all mice May 15 13:06:41.833873 coreos-metadata[1570]: May 15 13:06:41.831 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 13:06:42.042113 coreos-metadata[1501]: May 15 13:06:42.041 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 13:06:42.177617 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 13:06:42.191750 kernel: EDAC MC: Ver: 3.0.0 May 15 13:06:42.334708 systemd-networkd[1468]: eth0: DHCPv4 address 172.236.109.179/24, gateway 172.236.109.1 acquired from 23.215.118.230 May 15 13:06:42.335137 dbus-daemon[1502]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1468 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 15 13:06:42.346754 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. May 15 13:06:42.352027 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 15 13:06:42.380498 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 13:06:42.408232 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 13:06:42.416723 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 13:06:42.497260 systemd[1]: issuegen.service: Deactivated successfully. May 15 13:06:42.497599 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 13:06:42.698717 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 15 13:06:42.734734 containerd[1543]: time="2025-05-15T13:06:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 13:06:42.734407 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 13:06:42.736046 containerd[1543]: time="2025-05-15T13:06:42.736012235Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 13:06:42.739429 systemd[1]: Started sshd@0-172.236.109.179:22-139.178.89.65:36344.service - OpenSSH per-connection server daemon (139.178.89.65:36344). May 15 13:06:42.777805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 13:06:42.785329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 13:06:42.844566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 13:06:42.846222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 13:06:42.849603 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 13:06:42.854869 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 13:06:42.855763 systemd[1]: Reached target getty.target - Login Prompts. 
May 15 13:06:42.879392 coreos-metadata[1570]: May 15 13:06:42.876 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 13:06:42.893942 containerd[1543]: time="2025-05-15T13:06:42.893891461Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.18µs" May 15 13:06:42.893942 containerd[1543]: time="2025-05-15T13:06:42.893936801Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 13:06:42.894017 containerd[1543]: time="2025-05-15T13:06:42.893954781Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 13:06:42.894175 containerd[1543]: time="2025-05-15T13:06:42.894152391Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 13:06:42.894201 containerd[1543]: time="2025-05-15T13:06:42.894175741Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 13:06:42.894219 containerd[1543]: time="2025-05-15T13:06:42.894208341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 13:06:42.894320 containerd[1543]: time="2025-05-15T13:06:42.894294762Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 13:06:42.894320 containerd[1543]: time="2025-05-15T13:06:42.894318372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 13:06:42.897023 containerd[1543]: time="2025-05-15T13:06:42.896989267Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 
May 15 13:06:42.897023 containerd[1543]: time="2025-05-15T13:06:42.897014147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 13:06:42.897082 containerd[1543]: time="2025-05-15T13:06:42.897037077Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 13:06:42.897082 containerd[1543]: time="2025-05-15T13:06:42.897046447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 13:06:42.897189 containerd[1543]: time="2025-05-15T13:06:42.897164247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 13:06:42.899507 containerd[1543]: time="2025-05-15T13:06:42.897433708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 13:06:42.899507 containerd[1543]: time="2025-05-15T13:06:42.897512548Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 13:06:42.899507 containerd[1543]: time="2025-05-15T13:06:42.897529538Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 13:06:42.899507 containerd[1543]: time="2025-05-15T13:06:42.898155199Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 13:06:42.898756 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 15 13:06:42.901236 containerd[1543]: time="2025-05-15T13:06:42.901205325Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 15 13:06:42.901311 containerd[1543]: time="2025-05-15T13:06:42.901290716Z" level=info msg="metadata content store policy set" policy=shared
May 15 13:06:42.901638 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 15 13:06:42.902364 dbus-daemon[1502]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1590 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 15 13:06:42.906465 systemd[1]: Starting polkit.service - Authorization Manager...
May 15 13:06:42.917367 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 13:06:42.921712 containerd[1543]: time="2025-05-15T13:06:42.921666486Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 15 13:06:42.922108 containerd[1543]: time="2025-05-15T13:06:42.922077007Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 15 13:06:42.922139 containerd[1543]: time="2025-05-15T13:06:42.922111147Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 15 13:06:42.922139 containerd[1543]: time="2025-05-15T13:06:42.922131487Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 15 13:06:42.922192 containerd[1543]: time="2025-05-15T13:06:42.922166477Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 15 13:06:42.922213 containerd[1543]: time="2025-05-15T13:06:42.922193497Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 15 13:06:42.922242 containerd[1543]: time="2025-05-15T13:06:42.922212087Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 15 13:06:42.922242 containerd[1543]: time="2025-05-15T13:06:42.922228798Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 15 13:06:42.922276 containerd[1543]: time="2025-05-15T13:06:42.922245948Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 15 13:06:42.922276 containerd[1543]: time="2025-05-15T13:06:42.922261938Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 15 13:06:42.922308 containerd[1543]: time="2025-05-15T13:06:42.922275798Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 15 13:06:42.922308 containerd[1543]: time="2025-05-15T13:06:42.922287248Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 15 13:06:42.922546 containerd[1543]: time="2025-05-15T13:06:42.922523218Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 15 13:06:42.922583 containerd[1543]: time="2025-05-15T13:06:42.922570988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 15 13:06:42.922604 containerd[1543]: time="2025-05-15T13:06:42.922591308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 15 13:06:42.922621 containerd[1543]: time="2025-05-15T13:06:42.922606668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 15 13:06:42.922639 containerd[1543]: time="2025-05-15T13:06:42.922620748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 15 13:06:42.922639 containerd[1543]: time="2025-05-15T13:06:42.922634688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 15 13:06:42.922683 containerd[1543]: time="2025-05-15T13:06:42.922649798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 15 13:06:42.922683 containerd[1543]: time="2025-05-15T13:06:42.922670158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 15 13:06:42.922721 containerd[1543]: time="2025-05-15T13:06:42.922685828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 15 13:06:42.922721 containerd[1543]: time="2025-05-15T13:06:42.922701968Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 15 13:06:42.922721 containerd[1543]: time="2025-05-15T13:06:42.922716789Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 15 13:06:42.922871 containerd[1543]: time="2025-05-15T13:06:42.922837289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 15 13:06:42.922871 containerd[1543]: time="2025-05-15T13:06:42.922867839Z" level=info msg="Start snapshots syncer"
May 15 13:06:42.922933 containerd[1543]: time="2025-05-15T13:06:42.922910949Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 15 13:06:42.929548 containerd[1543]: time="2025-05-15T13:06:42.927531438Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 15 13:06:42.929763 containerd[1543]: time="2025-05-15T13:06:42.929583392Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 15 13:06:42.929814 containerd[1543]: time="2025-05-15T13:06:42.929785463Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 15 13:06:42.930013 containerd[1543]: time="2025-05-15T13:06:42.929984683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 15 13:06:42.930040 containerd[1543]: time="2025-05-15T13:06:42.930023213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 15 13:06:42.930059 containerd[1543]: time="2025-05-15T13:06:42.930040503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 15 13:06:42.930059 containerd[1543]: time="2025-05-15T13:06:42.930053853Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 15 13:06:42.930093 containerd[1543]: time="2025-05-15T13:06:42.930076283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 15 13:06:42.930111 containerd[1543]: time="2025-05-15T13:06:42.930089373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 15 13:06:42.930111 containerd[1543]: time="2025-05-15T13:06:42.930103043Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 15 13:06:42.930168 containerd[1543]: time="2025-05-15T13:06:42.930145153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 15 13:06:42.930188 containerd[1543]: time="2025-05-15T13:06:42.930170073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 15 13:06:42.930206 containerd[1543]: time="2025-05-15T13:06:42.930198743Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 15 13:06:42.931555 containerd[1543]: time="2025-05-15T13:06:42.931525296Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 13:06:42.931658 containerd[1543]: time="2025-05-15T13:06:42.931635826Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 15 13:06:42.931658 containerd[1543]: time="2025-05-15T13:06:42.931652986Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931663496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931671436Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931680296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931698006Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931726747Z" level=info msg="runtime interface created"
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931732547Z" level=info msg="created NRI interface"
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931740367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 15 13:06:42.931751 containerd[1543]: time="2025-05-15T13:06:42.931751767Z" level=info msg="Connect containerd service"
May 15 13:06:42.931902 containerd[1543]: time="2025-05-15T13:06:42.931775057Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 13:06:42.958627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 13:06:42.971528 containerd[1543]: time="2025-05-15T13:06:42.968836571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 13:06:43.020228 coreos-metadata[1570]: May 15 13:06:43.019 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
May 15 13:06:43.131253 systemd-logind[1516]: Watching system buttons on /dev/input/event2 (Power Button)
May 15 13:06:43.181049 coreos-metadata[1570]: May 15 13:06:43.179 INFO Fetch successful
May 15 13:06:43.244998 update-ssh-keys[1638]: Updated "/home/core/.ssh/authorized_keys"
May 15 13:06:43.247350 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 15 13:06:43.399128 systemd-networkd[1468]: eth0: Gained IPv6LL
May 15 13:06:43.409614 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection.
May 15 13:06:43.456031 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 13:06:43.460547 systemd[1]: Finished sshkeys.service.
May 15 13:06:43.529859 systemd[1]: Reached target network-online.target - Network is Online.
May 15 13:06:43.535599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:06:43.559535 containerd[1543]: time="2025-05-15T13:06:43.559014941Z" level=info msg="Start subscribing containerd event"
May 15 13:06:43.559535 containerd[1543]: time="2025-05-15T13:06:43.559338901Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 13:06:43.559892 containerd[1543]: time="2025-05-15T13:06:43.559866283Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 13:06:43.561058 containerd[1543]: time="2025-05-15T13:06:43.560987035Z" level=info msg="Start recovering state"
May 15 13:06:43.561562 containerd[1543]: time="2025-05-15T13:06:43.561535236Z" level=info msg="Start event monitor"
May 15 13:06:43.561598 containerd[1543]: time="2025-05-15T13:06:43.561567266Z" level=info msg="Start cni network conf syncer for default"
May 15 13:06:43.561975 containerd[1543]: time="2025-05-15T13:06:43.561951347Z" level=info msg="Start streaming server"
May 15 13:06:43.561999 containerd[1543]: time="2025-05-15T13:06:43.561985597Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 15 13:06:43.561999 containerd[1543]: time="2025-05-15T13:06:43.561994027Z" level=info msg="runtime interface starting up..."
May 15 13:06:43.562034 containerd[1543]: time="2025-05-15T13:06:43.561999947Z" level=info msg="starting plugins..."
May 15 13:06:43.562955 containerd[1543]: time="2025-05-15T13:06:43.562922569Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 15 13:06:43.564142 containerd[1543]: time="2025-05-15T13:06:43.564116381Z" level=info msg="containerd successfully booted in 0.847374s"
May 15 13:06:43.573714 polkitd[1625]: Started polkitd version 126
May 15 13:06:43.594680 polkitd[1625]: Loading rules from directory /etc/polkit-1/rules.d
May 15 13:06:43.595004 polkitd[1625]: Loading rules from directory /run/polkit-1/rules.d
May 15 13:06:43.595066 polkitd[1625]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 15 13:06:43.595335 polkitd[1625]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 15 13:06:43.595358 polkitd[1625]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 15 13:06:43.595400 polkitd[1625]: Loading rules from directory /usr/share/polkit-1/rules.d
May 15 13:06:43.600855 polkitd[1625]: Finished loading, compiling and executing 2 rules
May 15 13:06:43.601668 sshd[1614]: Accepted publickey for core from 139.178.89.65 port 36344 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:43.602891 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 15 13:06:43.604013 polkitd[1625]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 15 13:06:43.605908 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:43.615894 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 13:06:43.666525 systemd[1]: Started containerd.service - containerd container runtime.
May 15 13:06:43.667358 systemd[1]: Started polkit.service - Authorization Manager.
May 15 13:06:43.676934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 13:06:43.690457 systemd-hostnamed[1590]: Hostname set to <172-236-109-179> (transient)
May 15 13:06:43.690704 systemd-resolved[1415]: System hostname changed to '172-236-109-179'.
May 15 13:06:43.692872 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 13:06:43.697544 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 13:06:43.698721 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 13:06:43.714414 systemd-logind[1516]: New session 1 of user core.
May 15 13:06:43.745799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 13:06:43.752659 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 13:06:43.786304 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 13:06:43.792206 systemd-logind[1516]: New session c1 of user core.
May 15 13:06:43.989807 tar[1528]: linux-amd64/LICENSE
May 15 13:06:43.992589 tar[1528]: linux-amd64/README.md
May 15 13:06:44.023740 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 13:06:44.063817 coreos-metadata[1501]: May 15 13:06:44.063 INFO Putting http://169.254.169.254/v1/token: Attempt #3
May 15 13:06:44.118111 systemd[1672]: Queued start job for default target default.target.
May 15 13:06:44.125229 systemd[1672]: Created slice app.slice - User Application Slice.
May 15 13:06:44.125256 systemd[1672]: Reached target paths.target - Paths.
May 15 13:06:44.125761 systemd[1672]: Reached target timers.target - Timers.
May 15 13:06:44.133741 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 13:06:44.163202 coreos-metadata[1501]: May 15 13:06:44.158 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
May 15 13:06:44.167757 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 13:06:44.168186 systemd[1672]: Reached target sockets.target - Sockets.
May 15 13:06:44.168362 systemd[1672]: Reached target basic.target - Basic System.
May 15 13:06:44.168549 systemd[1672]: Reached target default.target - Main User Target.
May 15 13:06:44.168660 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 13:06:44.168681 systemd[1672]: Startup finished in 355ms.
May 15 13:06:44.178185 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 13:06:44.442586 coreos-metadata[1501]: May 15 13:06:44.433 INFO Fetch successful
May 15 13:06:44.442586 coreos-metadata[1501]: May 15 13:06:44.435 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
May 15 13:06:44.521700 systemd[1]: Started sshd@1-172.236.109.179:22-139.178.89.65:36346.service - OpenSSH per-connection server daemon (139.178.89.65:36346).
May 15 13:06:44.841598 coreos-metadata[1501]: May 15 13:06:44.835 INFO Fetch successful
May 15 13:06:44.905760 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection.
May 15 13:06:44.954490 sshd[1688]: Accepted publickey for core from 139.178.89.65 port 36346 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:44.954768 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:44.965863 systemd-logind[1516]: New session 2 of user core.
May 15 13:06:44.969343 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 13:06:44.979613 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 15 13:06:44.982092 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 13:06:45.310598 sshd[1708]: Connection closed by 139.178.89.65 port 36346
May 15 13:06:45.312785 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
May 15 13:06:45.319275 systemd[1]: sshd@1-172.236.109.179:22-139.178.89.65:36346.service: Deactivated successfully.
May 15 13:06:45.322288 systemd[1]: session-2.scope: Deactivated successfully.
May 15 13:06:45.324356 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit.
May 15 13:06:45.327005 systemd-logind[1516]: Removed session 2.
May 15 13:06:45.371697 systemd[1]: Started sshd@2-172.236.109.179:22-139.178.89.65:36350.service - OpenSSH per-connection server daemon (139.178.89.65:36350).
May 15 13:06:45.576673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:06:45.577839 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 13:06:45.580252 systemd[1]: Startup finished in 4.458s (kernel) + 8.448s (initrd) + 8.499s (userspace) = 21.406s.
May 15 13:06:45.585306 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 13:06:45.733554 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 36350 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:45.735323 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:45.744814 systemd-logind[1516]: New session 3 of user core.
May 15 13:06:45.796783 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 13:06:46.013457 sshd[1727]: Connection closed by 139.178.89.65 port 36350
May 15 13:06:46.019013 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
May 15 13:06:46.025498 systemd[1]: sshd@2-172.236.109.179:22-139.178.89.65:36350.service: Deactivated successfully.
May 15 13:06:46.028012 systemd[1]: session-3.scope: Deactivated successfully.
May 15 13:06:46.029687 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit.
May 15 13:06:46.031788 systemd-logind[1516]: Removed session 3.
May 15 13:06:46.464496 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection.
May 15 13:06:46.665330 kubelet[1722]: E0515 13:06:46.665206 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 13:06:46.671275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 13:06:46.671549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 13:06:46.672239 systemd[1]: kubelet.service: Consumed 2.277s CPU time, 234M memory peak.
May 15 13:06:56.093015 systemd[1]: Started sshd@3-172.236.109.179:22-139.178.89.65:33948.service - OpenSSH per-connection server daemon (139.178.89.65:33948).
May 15 13:06:56.437054 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 33948 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:56.438745 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:56.444700 systemd-logind[1516]: New session 4 of user core.
May 15 13:06:56.450619 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 13:06:56.684053 sshd[1741]: Connection closed by 139.178.89.65 port 33948
May 15 13:06:56.684776 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
May 15 13:06:56.689443 systemd[1]: sshd@3-172.236.109.179:22-139.178.89.65:33948.service: Deactivated successfully.
May 15 13:06:56.691880 systemd[1]: session-4.scope: Deactivated successfully.
May 15 13:06:56.693005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 13:06:56.693829 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit.
May 15 13:06:56.696832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:06:56.698289 systemd-logind[1516]: Removed session 4.
May 15 13:06:56.742691 systemd[1]: Started sshd@4-172.236.109.179:22-139.178.89.65:33960.service - OpenSSH per-connection server daemon (139.178.89.65:33960).
May 15 13:06:56.918581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:06:56.931121 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 13:06:57.028775 kubelet[1757]: E0515 13:06:57.028621 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 13:06:57.033980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 13:06:57.034192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 13:06:57.034632 systemd[1]: kubelet.service: Consumed 291ms CPU time, 96.2M memory peak.
May 15 13:06:57.079971 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 33960 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:57.081683 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:57.087964 systemd-logind[1516]: New session 5 of user core.
May 15 13:06:57.093608 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 13:06:57.318339 sshd[1764]: Connection closed by 139.178.89.65 port 33960
May 15 13:06:57.318997 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
May 15 13:06:57.323665 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit.
May 15 13:06:57.324503 systemd[1]: sshd@4-172.236.109.179:22-139.178.89.65:33960.service: Deactivated successfully.
May 15 13:06:57.326963 systemd[1]: session-5.scope: Deactivated successfully.
May 15 13:06:57.329131 systemd-logind[1516]: Removed session 5.
May 15 13:06:57.388712 systemd[1]: Started sshd@5-172.236.109.179:22-139.178.89.65:33974.service - OpenSSH per-connection server daemon (139.178.89.65:33974).
May 15 13:06:57.547333 systemd[1]: Started sshd@6-172.236.109.179:22-80.94.95.116:41616.service - OpenSSH per-connection server daemon (80.94.95.116:41616).
May 15 13:06:57.736108 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 33974 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:57.737588 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:57.742768 systemd-logind[1516]: New session 6 of user core.
May 15 13:06:57.749655 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 13:06:57.983314 sshd[1774]: Connection closed by 139.178.89.65 port 33974
May 15 13:06:57.984188 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
May 15 13:06:57.988889 systemd[1]: sshd@5-172.236.109.179:22-139.178.89.65:33974.service: Deactivated successfully.
May 15 13:06:57.992076 systemd[1]: session-6.scope: Deactivated successfully.
May 15 13:06:57.992982 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit.
May 15 13:06:57.995456 systemd-logind[1516]: Removed session 6.
May 15 13:06:58.042429 systemd[1]: Started sshd@7-172.236.109.179:22-139.178.89.65:33982.service - OpenSSH per-connection server daemon (139.178.89.65:33982).
May 15 13:06:58.387339 sshd[1780]: Accepted publickey for core from 139.178.89.65 port 33982 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:58.388808 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:58.394079 systemd-logind[1516]: New session 7 of user core.
May 15 13:06:58.403616 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 13:06:58.597422 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 13:06:58.597778 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 13:06:58.614722 sudo[1783]: pam_unix(sudo:session): session closed for user root
May 15 13:06:58.666082 sshd[1782]: Connection closed by 139.178.89.65 port 33982
May 15 13:06:58.667153 sshd-session[1780]: pam_unix(sshd:session): session closed for user core
May 15 13:06:58.670954 systemd[1]: sshd@7-172.236.109.179:22-139.178.89.65:33982.service: Deactivated successfully.
May 15 13:06:58.673317 systemd[1]: session-7.scope: Deactivated successfully.
May 15 13:06:58.676281 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit.
May 15 13:06:58.677711 systemd-logind[1516]: Removed session 7.
May 15 13:06:58.728307 systemd[1]: Started sshd@8-172.236.109.179:22-139.178.89.65:33994.service - OpenSSH per-connection server daemon (139.178.89.65:33994).
May 15 13:06:59.067673 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 33994 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:59.069040 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:59.074096 systemd-logind[1516]: New session 8 of user core.
May 15 13:06:59.083608 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 13:06:59.266061 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 13:06:59.266429 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 13:06:59.271117 sudo[1794]: pam_unix(sudo:session): session closed for user root
May 15 13:06:59.277084 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 13:06:59.277406 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 13:06:59.286984 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 13:06:59.328470 augenrules[1816]: No rules
May 15 13:06:59.330027 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 13:06:59.330325 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 13:06:59.332142 sudo[1793]: pam_unix(sudo:session): session closed for user root
May 15 13:06:59.383444 sshd[1792]: Connection closed by 139.178.89.65 port 33994
May 15 13:06:59.384152 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
May 15 13:06:59.387716 systemd[1]: sshd@8-172.236.109.179:22-139.178.89.65:33994.service: Deactivated successfully.
May 15 13:06:59.390129 systemd[1]: session-8.scope: Deactivated successfully.
May 15 13:06:59.391064 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit.
May 15 13:06:59.392494 systemd-logind[1516]: Removed session 8.
May 15 13:06:59.447019 systemd[1]: Started sshd@9-172.236.109.179:22-139.178.89.65:33996.service - OpenSSH per-connection server daemon (139.178.89.65:33996).
May 15 13:06:59.703181 sshd[1773]: Invalid user config from 80.94.95.116 port 41616
May 15 13:06:59.801820 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 33996 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:06:59.803304 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:06:59.808656 systemd-logind[1516]: New session 9 of user core.
May 15 13:06:59.814612 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 13:07:00.006412 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 13:07:00.006922 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 13:07:00.481433 sshd[1773]: Connection closed by invalid user config 80.94.95.116 port 41616 [preauth]
May 15 13:07:00.489624 systemd[1]: sshd@6-172.236.109.179:22-80.94.95.116:41616.service: Deactivated successfully.
May 15 13:07:00.943251 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 13:07:00.974179 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 13:07:01.648172 dockerd[1847]: time="2025-05-15T13:07:01.648077691Z" level=info msg="Starting up"
May 15 13:07:01.650583 dockerd[1847]: time="2025-05-15T13:07:01.650512656Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 15 13:07:01.708633 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4227559897-merged.mount: Deactivated successfully.
May 15 13:07:01.747850 dockerd[1847]: time="2025-05-15T13:07:01.747684900Z" level=info msg="Loading containers: start."
May 15 13:07:01.766498 kernel: Initializing XFRM netlink socket
May 15 13:07:01.997233 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection.
May 15 13:07:02.507735 systemd-timesyncd[1454]: Contacted time server [2600:1702:7400:9ac0::314:5c]:123 (2.flatcar.pool.ntp.org).
May 15 13:07:02.507815 systemd-timesyncd[1454]: Initial clock synchronization to Thu 2025-05-15 13:07:02.507431 UTC.
May 15 13:07:02.507922 systemd-resolved[1415]: Clock change detected. Flushing caches.
May 15 13:07:02.565911 systemd-networkd[1468]: docker0: Link UP
May 15 13:07:02.569255 dockerd[1847]: time="2025-05-15T13:07:02.569200823Z" level=info msg="Loading containers: done."
May 15 13:07:02.595220 dockerd[1847]: time="2025-05-15T13:07:02.595162685Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 13:07:02.595434 dockerd[1847]: time="2025-05-15T13:07:02.595255455Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 15 13:07:02.595434 dockerd[1847]: time="2025-05-15T13:07:02.595361385Z" level=info msg="Initializing buildkit"
May 15 13:07:02.616052 dockerd[1847]: time="2025-05-15T13:07:02.615995396Z" level=info msg="Completed buildkit initialization"
May 15 13:07:02.622448 dockerd[1847]: time="2025-05-15T13:07:02.622400019Z" level=info msg="Daemon has completed initialization"
May 15 13:07:02.622654 dockerd[1847]: time="2025-05-15T13:07:02.622546769Z" level=info msg="API listen on /run/docker.sock"
May 15 13:07:02.622728 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 13:07:03.344793 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3239800802-merged.mount: Deactivated successfully.
May 15 13:07:03.647004 containerd[1543]: time="2025-05-15T13:07:03.646779857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 13:07:04.538002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996411576.mount: Deactivated successfully.
May 15 13:07:06.351482 containerd[1543]: time="2025-05-15T13:07:06.351413095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:06.352237 containerd[1543]: time="2025-05-15T13:07:06.352192497Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 15 13:07:06.354476 containerd[1543]: time="2025-05-15T13:07:06.354434511Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:06.356774 containerd[1543]: time="2025-05-15T13:07:06.356745626Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.709825078s"
May 15 13:07:06.358494 containerd[1543]: time="2025-05-15T13:07:06.356870316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 15 13:07:06.358494 containerd[1543]: time="2025-05-15T13:07:06.357267157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:06.360786 containerd[1543]: time="2025-05-15T13:07:06.360758324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 13:07:07.780796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 13:07:07.784829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:07:08.092717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:07:08.101031 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 13:07:08.592472 kubelet[2113]: E0515 13:07:08.592053 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 13:07:08.597113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 13:07:08.597362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 13:07:08.598276 systemd[1]: kubelet.service: Consumed 755ms CPU time, 95.6M memory peak.
May 15 13:07:09.066153 containerd[1543]: time="2025-05-15T13:07:09.064808531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:09.067373 containerd[1543]: time="2025-05-15T13:07:09.066694225Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 15 13:07:09.069376 containerd[1543]: time="2025-05-15T13:07:09.068089337Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:09.070500 containerd[1543]: time="2025-05-15T13:07:09.070466712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:09.071494 containerd[1543]: time="2025-05-15T13:07:09.071457384Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.71066392s"
May 15 13:07:09.071581 containerd[1543]: time="2025-05-15T13:07:09.071569384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 15 13:07:09.073248 containerd[1543]: time="2025-05-15T13:07:09.073204198Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 13:07:10.950665 containerd[1543]: time="2025-05-15T13:07:10.950617592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:10.951712 containerd[1543]: time="2025-05-15T13:07:10.951571354Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 15 13:07:10.952917 containerd[1543]: time="2025-05-15T13:07:10.952883236Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:10.958185 containerd[1543]: time="2025-05-15T13:07:10.957698736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:10.959495 containerd[1543]: time="2025-05-15T13:07:10.959438259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.886186171s"
May 15 13:07:10.959615 containerd[1543]: time="2025-05-15T13:07:10.959592810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 15 13:07:10.960317 containerd[1543]: time="2025-05-15T13:07:10.960171451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 13:07:12.846495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175183833.mount: Deactivated successfully.
May 15 13:07:13.713045 containerd[1543]: time="2025-05-15T13:07:13.712965775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:13.713877 containerd[1543]: time="2025-05-15T13:07:13.713853197Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 15 13:07:13.714349 containerd[1543]: time="2025-05-15T13:07:13.714293918Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:13.716031 containerd[1543]: time="2025-05-15T13:07:13.715995571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:13.716888 containerd[1543]: time="2025-05-15T13:07:13.716860363Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.756384922s"
May 15 13:07:13.717164 containerd[1543]: time="2025-05-15T13:07:13.717147813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 15 13:07:13.719658 containerd[1543]: time="2025-05-15T13:07:13.719572428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 13:07:14.208358 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 15 13:07:14.392994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670122811.mount: Deactivated successfully.
May 15 13:07:15.918592 containerd[1543]: time="2025-05-15T13:07:15.918222295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:15.921493 containerd[1543]: time="2025-05-15T13:07:15.919660567Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 15 13:07:15.924062 containerd[1543]: time="2025-05-15T13:07:15.924021896Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:15.927361 containerd[1543]: time="2025-05-15T13:07:15.927317723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:15.928432 containerd[1543]: time="2025-05-15T13:07:15.928406085Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.208649106s"
May 15 13:07:15.928742 containerd[1543]: time="2025-05-15T13:07:15.928491695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 15 13:07:15.931476 containerd[1543]: time="2025-05-15T13:07:15.931446731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 13:07:16.563600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575216724.mount: Deactivated successfully.
May 15 13:07:16.567584 containerd[1543]: time="2025-05-15T13:07:16.567513773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 13:07:16.567971 containerd[1543]: time="2025-05-15T13:07:16.567923254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 15 13:07:16.568836 containerd[1543]: time="2025-05-15T13:07:16.568798405Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 13:07:16.570416 containerd[1543]: time="2025-05-15T13:07:16.570391489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 13:07:16.571045 containerd[1543]: time="2025-05-15T13:07:16.571011370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.525539ms"
May 15 13:07:16.571112 containerd[1543]: time="2025-05-15T13:07:16.571049800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 13:07:16.572255 containerd[1543]: time="2025-05-15T13:07:16.572231452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 13:07:17.199738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774329141.mount: Deactivated successfully.
May 15 13:07:18.967076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 13:07:18.972748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:07:19.304484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:07:19.317623 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 13:07:19.428032 kubelet[2245]: E0515 13:07:19.427954 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 13:07:19.431369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 13:07:19.431629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 13:07:19.433064 systemd[1]: kubelet.service: Consumed 368ms CPU time, 93.4M memory peak.
May 15 13:07:20.539681 containerd[1543]: time="2025-05-15T13:07:20.539536515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:20.541870 containerd[1543]: time="2025-05-15T13:07:20.541486819Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 15 13:07:20.542485 containerd[1543]: time="2025-05-15T13:07:20.542439551Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:20.546014 containerd[1543]: time="2025-05-15T13:07:20.545969368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:07:20.547623 containerd[1543]: time="2025-05-15T13:07:20.547589221Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.975325199s"
May 15 13:07:20.547755 containerd[1543]: time="2025-05-15T13:07:20.547729031Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 15 13:07:21.930514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:07:21.930743 systemd[1]: kubelet.service: Consumed 368ms CPU time, 93.4M memory peak.
May 15 13:07:21.934256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:07:21.961528 systemd[1]: Reload requested from client PID 2280 ('systemctl') (unit session-9.scope)...
May 15 13:07:21.961637 systemd[1]: Reloading...
May 15 13:07:22.099600 zram_generator::config[2320]: No configuration found.
May 15 13:07:22.221230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 13:07:22.335165 systemd[1]: Reloading finished in 373 ms.
May 15 13:07:22.415154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 13:07:22.415272 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 13:07:22.415588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:07:22.415632 systemd[1]: kubelet.service: Consumed 265ms CPU time, 83.6M memory peak.
May 15 13:07:22.418149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 13:07:22.612908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 13:07:22.622961 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 13:07:22.697060 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 13:07:22.697060 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 13:07:22.697060 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 13:07:22.698262 kubelet[2377]: I0515 13:07:22.698189 2377 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 13:07:23.232866 kubelet[2377]: I0515 13:07:23.232810 2377 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 15 13:07:23.232866 kubelet[2377]: I0515 13:07:23.232843 2377 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 13:07:23.233261 kubelet[2377]: I0515 13:07:23.233245 2377 server.go:929] "Client rotation is on, will bootstrap in background"
May 15 13:07:23.265542 kubelet[2377]: I0515 13:07:23.264654 2377 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 13:07:23.265542 kubelet[2377]: E0515 13:07:23.265495 2377 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.109.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.109.179:6443: connect: connection refused" logger="UnhandledError"
May 15 13:07:23.281054 kubelet[2377]: I0515 13:07:23.281030 2377 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 15 13:07:23.289737 kubelet[2377]: I0515 13:07:23.289699 2377 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 13:07:23.293033 kubelet[2377]: I0515 13:07:23.292805 2377 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 13:07:23.293373 kubelet[2377]: I0515 13:07:23.293294 2377 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 13:07:23.293739 kubelet[2377]: I0515 13:07:23.293367 2377 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-109-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 13:07:23.294032 kubelet[2377]: I0515 13:07:23.293789 2377 topology_manager.go:138] "Creating topology manager with none policy"
May 15 13:07:23.294032 kubelet[2377]: I0515 13:07:23.293806 2377 container_manager_linux.go:300] "Creating device plugin manager"
May 15 13:07:23.294133 kubelet[2377]: I0515 13:07:23.294099 2377 state_mem.go:36] "Initialized new in-memory state store"
May 15 13:07:23.298426 kubelet[2377]: I0515 13:07:23.297874 2377 kubelet.go:408] "Attempting to sync node with API server"
May 15 13:07:23.298426 kubelet[2377]: I0515 13:07:23.297914 2377 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 13:07:23.298426 kubelet[2377]: I0515 13:07:23.298187 2377 kubelet.go:314] "Adding apiserver pod source"
May 15 13:07:23.298426 kubelet[2377]: I0515 13:07:23.298230 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 13:07:23.308888 kubelet[2377]: W0515 13:07:23.308806 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.109.179:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-109-179&limit=500&resourceVersion=0": dial tcp 172.236.109.179:6443: connect: connection refused
May 15 13:07:23.309203 kubelet[2377]: E0515 13:07:23.309159 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.109.179:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-109-179&limit=500&resourceVersion=0\": dial tcp 172.236.109.179:6443: connect: connection refused" logger="UnhandledError"
May 15 13:07:23.310281 kubelet[2377]: I0515 13:07:23.309298 2377 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 15 13:07:23.312153 kubelet[2377]: I0515 13:07:23.311730 2377 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 13:07:23.313638 kubelet[2377]: W0515 13:07:23.312949 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 13:07:23.315205 kubelet[2377]: I0515 13:07:23.315187 2377 server.go:1269] "Started kubelet"
May 15 13:07:23.317128 kubelet[2377]: W0515 13:07:23.316975 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.109.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.109.179:6443: connect: connection refused
May 15 13:07:23.317128 kubelet[2377]: E0515 13:07:23.317036 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.109.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.109.179:6443: connect: connection refused" logger="UnhandledError"
May 15 13:07:23.317224 kubelet[2377]: I0515 13:07:23.317163 2377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 13:07:23.319546 kubelet[2377]: I0515 13:07:23.319470 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 13:07:23.320255 kubelet[2377]: I0515 13:07:23.320226 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 13:07:23.320342 kubelet[2377]: I0515 13:07:23.320326 2377 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 13:07:23.323516 kubelet[2377]: E0515 13:07:23.320654 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.109.179:6443/api/v1/namespaces/default/events\": dial tcp 172.236.109.179:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-109-179.183fb53a80b589b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-109-179,UID:172-236-109-179,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-109-179,},FirstTimestamp:2025-05-15 13:07:23.315153335 +0000 UTC m=+0.682705136,LastTimestamp:2025-05-15 13:07:23.315153335 +0000 UTC m=+0.682705136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-109-179,}"
May 15 13:07:23.324446 kubelet[2377]: I0515 13:07:23.324429 2377 server.go:460] "Adding debug handlers to kubelet server"
May 15 13:07:23.325883 kubelet[2377]: I0515 13:07:23.325862 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 13:07:23.330289 kubelet[2377]: I0515 13:07:23.330252 2377 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 13:07:23.330585 kubelet[2377]: E0515 13:07:23.330537 2377 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-236-109-179\" not found"
May 15 13:07:23.334232 kubelet[2377]: I0515 13:07:23.334068 2377 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 15 13:07:23.334232 kubelet[2377]: I0515 13:07:23.334171 2377 reconciler.go:26] "Reconciler: start to sync state"
May 15 13:07:23.335346 kubelet[2377]: E0515 13:07:23.335316 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-179?timeout=10s\": dial tcp 172.236.109.179:6443: connect: connection refused" interval="200ms"
May 15 13:07:23.336378 kubelet[2377]: I0515 13:07:23.336359 2377 factory.go:221] Registration of the systemd container factory successfully
May 15 13:07:23.336500 kubelet[2377]: I0515 13:07:23.336482 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 13:07:23.337253 kubelet[2377]: W0515 13:07:23.337221 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.109.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.109.179:6443: connect: connection refused
May 15 13:07:23.337920 kubelet[2377]: E0515 13:07:23.337901 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.109.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.109.179:6443: connect: connection refused" logger="UnhandledError"
May 15 13:07:23.338268 kubelet[2377]: E0515 13:07:23.338251 2377 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 13:07:23.338897 kubelet[2377]: I0515 13:07:23.338880 2377 factory.go:221] Registration of the containerd container factory successfully
May 15 13:07:23.366269 kubelet[2377]: I0515 13:07:23.366208 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 13:07:23.367725 kubelet[2377]: I0515 13:07:23.367692 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 13:07:23.367784 kubelet[2377]: I0515 13:07:23.367760 2377 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 13:07:23.367905 kubelet[2377]: I0515 13:07:23.367814 2377 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 13:07:23.367905 kubelet[2377]: E0515 13:07:23.367884 2377 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 13:07:23.378796 kubelet[2377]: W0515 13:07:23.378732 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.109.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.109.179:6443: connect: connection refused
May 15 13:07:23.378796 kubelet[2377]: E0515 13:07:23.378788 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.109.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.109.179:6443: connect: connection refused" logger="UnhandledError"
May 15 13:07:23.379593 kubelet[2377]: I0515 13:07:23.379523 2377 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 13:07:23.379593 kubelet[2377]: I0515 13:07:23.379537 2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 13:07:23.379771 kubelet[2377]: I0515 13:07:23.379683 2377 state_mem.go:36] "Initialized new in-memory state store"
May 15 13:07:23.381222 kubelet[2377]: I0515 13:07:23.381166 2377 policy_none.go:49] "None policy: Start"
May 15 13:07:23.381797 kubelet[2377]: I0515 13:07:23.381778 2377 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 13:07:23.381852 kubelet[2377]: I0515 13:07:23.381812 2377 state_mem.go:35] "Initializing new in-memory state store"
May 15 13:07:23.390655 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 13:07:23.405738 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 13:07:23.409532 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 13:07:23.420198 kubelet[2377]: I0515 13:07:23.420166 2377 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 13:07:23.420438 kubelet[2377]: I0515 13:07:23.420358 2377 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 13:07:23.420438 kubelet[2377]: I0515 13:07:23.420383 2377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 13:07:23.421440 kubelet[2377]: I0515 13:07:23.420713 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 13:07:23.423117 kubelet[2377]: E0515 13:07:23.423061 2377 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-109-179\" not found"
May 15 13:07:23.479233 systemd[1]: Created slice kubepods-burstable-pod56193c5c79b72f02fcd8287c32f469ed.slice - libcontainer container kubepods-burstable-pod56193c5c79b72f02fcd8287c32f469ed.slice.
May 15 13:07:23.506991 systemd[1]: Created slice kubepods-burstable-poddbb963492b4d91fca5265ac730055a2a.slice - libcontainer container kubepods-burstable-poddbb963492b4d91fca5265ac730055a2a.slice.
May 15 13:07:23.512751 systemd[1]: Created slice kubepods-burstable-pod74cb566fde9a0eb500aa5f409a127a72.slice - libcontainer container kubepods-burstable-pod74cb566fde9a0eb500aa5f409a127a72.slice.
May 15 13:07:23.522044 kubelet[2377]: I0515 13:07:23.521986 2377 kubelet_node_status.go:72] "Attempting to register node" node="172-236-109-179"
May 15 13:07:23.522318 kubelet[2377]: E0515 13:07:23.522294 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.109.179:6443/api/v1/nodes\": dial tcp 172.236.109.179:6443: connect: connection refused" node="172-236-109-179"
May 15 13:07:23.535967 kubelet[2377]: I0515 13:07:23.535755 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-ca-certs\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:07:23.535967 kubelet[2377]: I0515 13:07:23.535789 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-k8s-certs\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:07:23.535967 kubelet[2377]: I0515 13:07:23.535809 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:07:23.535967 kubelet[2377]: I0515 13:07:23.535830 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-ca-certs\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:07:23.535967 kubelet[2377]: E0515 13:07:23.535829 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-179?timeout=10s\": dial tcp 172.236.109.179:6443: connect: connection refused" interval="400ms"
May 15 13:07:23.536171 kubelet[2377]: I0515 13:07:23.535846 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-flexvolume-dir\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:07:23.536171 kubelet[2377]: I0515 13:07:23.535862 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-k8s-certs\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:07:23.536171 kubelet[2377]: I0515 13:07:23.535878 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56193c5c79b72f02fcd8287c32f469ed-kubeconfig\") pod \"kube-scheduler-172-236-109-179\" (UID: \"56193c5c79b72f02fcd8287c32f469ed\") " pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:07:23.536171 kubelet[2377]: I0515 13:07:23.535894 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-kubeconfig\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:07:23.536171 kubelet[2377]: I0515 13:07:23.535910 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:07:23.725083 kubelet[2377]: I0515 13:07:23.725017 2377 kubelet_node_status.go:72] "Attempting to register node" node="172-236-109-179"
May 15 13:07:23.725607 kubelet[2377]: E0515 13:07:23.725304 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.109.179:6443/api/v1/nodes\": dial tcp 172.236.109.179:6443: connect: connection refused" node="172-236-109-179"
May 15 13:07:23.804215 kubelet[2377]: E0515 13:07:23.804126 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:07:23.805350 containerd[1543]: time="2025-05-15T13:07:23.805158575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-109-179,Uid:56193c5c79b72f02fcd8287c32f469ed,Namespace:kube-system,Attempt:0,}"
May 15 13:07:23.811626 kubelet[2377]: E0515 13:07:23.811587 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:07:23.812026 containerd[1543]: time="2025-05-15T13:07:23.811993669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-109-179,Uid:dbb963492b4d91fca5265ac730055a2a,Namespace:kube-system,Attempt:0,}"
May 15 13:07:23.815929 kubelet[2377]: E0515 13:07:23.815750 2377 dns.go:153] "Nameserver
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:23.816110 containerd[1543]: time="2025-05-15T13:07:23.816082347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-109-179,Uid:74cb566fde9a0eb500aa5f409a127a72,Namespace:kube-system,Attempt:0,}" May 15 13:07:23.860581 containerd[1543]: time="2025-05-15T13:07:23.859737934Z" level=info msg="connecting to shim 69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb" address="unix:///run/containerd/s/7c20858ac878a5df1fc34322847c11e320c23300c0895e6d963e4fef8ef660e6" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:23.867723 containerd[1543]: time="2025-05-15T13:07:23.867689940Z" level=info msg="connecting to shim a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48" address="unix:///run/containerd/s/ba1d0c1e4b53720e58b3343aa6503b91b45af3d435e0c9b2308512ef66d161ea" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:23.882139 containerd[1543]: time="2025-05-15T13:07:23.882108089Z" level=info msg="connecting to shim 0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b" address="unix:///run/containerd/s/aa2189b032aa742c93c7788f96ceab77a68f6fc3c1fe027a83d753668092ab39" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:23.929937 systemd[1]: Started cri-containerd-a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48.scope - libcontainer container a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48. 
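Editor's note: the recurring `dns.go:153` warnings stem from the glibc resolver's cap of three `nameserver` entries in resolv.conf (`MAXNS`); the kubelet applies the first three and reports the rest as omitted, which is why exactly `172.232.0.18 172.232.0.17 172.232.0.16` appear as the applied line. A rough sketch of that truncation (hypothetical helper, not the kubelet's actual Go code):

```python
MAX_NAMESERVERS = 3  # glibc resolv.conf limit (MAXNS)

def apply_nameserver_limit(resolv_conf: str):
    """Return (applied, omitted) nameserver lists from resolv.conf text."""
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

conf = (
    "nameserver 172.232.0.18\n"
    "nameserver 172.232.0.17\n"
    "nameserver 172.232.0.16\n"
    "nameserver 10.0.0.1\n"
)
applied, omitted = apply_nameserver_limit(conf)
# applied -> the three addresses seen in the log; 10.0.0.1 is omitted
```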
May 15 13:07:23.937450 kubelet[2377]: E0515 13:07:23.937391 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-179?timeout=10s\": dial tcp 172.236.109.179:6443: connect: connection refused" interval="800ms" May 15 13:07:23.940865 systemd[1]: Started cri-containerd-0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b.scope - libcontainer container 0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b. May 15 13:07:23.947603 systemd[1]: Started cri-containerd-69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb.scope - libcontainer container 69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb. May 15 13:07:24.053258 containerd[1543]: time="2025-05-15T13:07:24.053216921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-109-179,Uid:dbb963492b4d91fca5265ac730055a2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb\"" May 15 13:07:24.055594 kubelet[2377]: E0515 13:07:24.055412 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.061576 containerd[1543]: time="2025-05-15T13:07:24.061395997Z" level=info msg="CreateContainer within sandbox \"69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 13:07:24.064323 containerd[1543]: time="2025-05-15T13:07:24.064297973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-109-179,Uid:74cb566fde9a0eb500aa5f409a127a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48\"" May 15 13:07:24.065120 kubelet[2377]: 
E0515 13:07:24.065097 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.067691 containerd[1543]: time="2025-05-15T13:07:24.067627960Z" level=info msg="CreateContainer within sandbox \"a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 13:07:24.074051 containerd[1543]: time="2025-05-15T13:07:24.074025032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-109-179,Uid:56193c5c79b72f02fcd8287c32f469ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b\"" May 15 13:07:24.075267 kubelet[2377]: E0515 13:07:24.075122 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.077851 containerd[1543]: time="2025-05-15T13:07:24.077822690Z" level=info msg="CreateContainer within sandbox \"0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 13:07:24.079342 containerd[1543]: time="2025-05-15T13:07:24.079278553Z" level=info msg="Container e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:24.080903 containerd[1543]: time="2025-05-15T13:07:24.080883426Z" level=info msg="Container 6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:24.085848 containerd[1543]: time="2025-05-15T13:07:24.085817726Z" level=info msg="Container 4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:24.092467 
containerd[1543]: time="2025-05-15T13:07:24.092434469Z" level=info msg="CreateContainer within sandbox \"69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf\"" May 15 13:07:24.092799 containerd[1543]: time="2025-05-15T13:07:24.092778570Z" level=info msg="CreateContainer within sandbox \"a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666\"" May 15 13:07:24.097843 containerd[1543]: time="2025-05-15T13:07:24.097808320Z" level=info msg="StartContainer for \"6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666\"" May 15 13:07:24.099579 containerd[1543]: time="2025-05-15T13:07:24.099140153Z" level=info msg="connecting to shim 6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666" address="unix:///run/containerd/s/ba1d0c1e4b53720e58b3343aa6503b91b45af3d435e0c9b2308512ef66d161ea" protocol=ttrpc version=3 May 15 13:07:24.099807 containerd[1543]: time="2025-05-15T13:07:24.099781534Z" level=info msg="StartContainer for \"e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf\"" May 15 13:07:24.102056 containerd[1543]: time="2025-05-15T13:07:24.102029018Z" level=info msg="CreateContainer within sandbox \"0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad\"" May 15 13:07:24.102459 containerd[1543]: time="2025-05-15T13:07:24.102436859Z" level=info msg="StartContainer for \"4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad\"" May 15 13:07:24.103333 containerd[1543]: time="2025-05-15T13:07:24.103301081Z" level=info msg="connecting to shim 
e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf" address="unix:///run/containerd/s/7c20858ac878a5df1fc34322847c11e320c23300c0895e6d963e4fef8ef660e6" protocol=ttrpc version=3 May 15 13:07:24.105747 containerd[1543]: time="2025-05-15T13:07:24.105723826Z" level=info msg="connecting to shim 4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad" address="unix:///run/containerd/s/aa2189b032aa742c93c7788f96ceab77a68f6fc3c1fe027a83d753668092ab39" protocol=ttrpc version=3 May 15 13:07:24.126752 systemd[1]: Started cri-containerd-e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf.scope - libcontainer container e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf. May 15 13:07:24.129605 kubelet[2377]: I0515 13:07:24.129480 2377 kubelet_node_status.go:72] "Attempting to register node" node="172-236-109-179" May 15 13:07:24.130166 kubelet[2377]: E0515 13:07:24.130124 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.109.179:6443/api/v1/nodes\": dial tcp 172.236.109.179:6443: connect: connection refused" node="172-236-109-179" May 15 13:07:24.139678 systemd[1]: Started cri-containerd-6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666.scope - libcontainer container 6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666. May 15 13:07:24.143426 systemd[1]: Started cri-containerd-4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad.scope - libcontainer container 4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad. 
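Editor's note: the two "Failed to ensure lease exists, will retry" entries above report `interval="400ms"` and then `interval="800ms"`, i.e. the node-lease controller doubles its retry delay while the API server at `172.236.109.179:6443` is still refusing connections. A doubling backoff of that shape can be sketched as follows (the cap value here is an illustrative assumption, not a value taken from the log):

```python
def backoff_intervals(base: float, factor: float, cap: float, attempts: int):
    """Yield successive retry delays: base, base*factor, ... clamped at cap."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

# First retries match the log: 0.4s, then 0.8s, then keep doubling.
print([round(d, 1) for d in backoff_intervals(0.4, 2.0, 7.0, 5)])
# [0.4, 0.8, 1.6, 3.2, 6.4]
```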
May 15 13:07:24.223805 containerd[1543]: time="2025-05-15T13:07:24.223666092Z" level=info msg="StartContainer for \"e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf\" returns successfully" May 15 13:07:24.238646 containerd[1543]: time="2025-05-15T13:07:24.238599912Z" level=info msg="StartContainer for \"6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666\" returns successfully" May 15 13:07:24.286753 containerd[1543]: time="2025-05-15T13:07:24.286696838Z" level=info msg="StartContainer for \"4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad\" returns successfully" May 15 13:07:24.391254 kubelet[2377]: E0515 13:07:24.391220 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.404134 kubelet[2377]: E0515 13:07:24.392631 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.411879 kubelet[2377]: E0515 13:07:24.411846 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:24.937295 kubelet[2377]: I0515 13:07:24.937247 2377 kubelet_node_status.go:72] "Attempting to register node" node="172-236-109-179" May 15 13:07:25.418691 kubelet[2377]: E0515 13:07:25.414855 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:26.216034 kubelet[2377]: E0515 13:07:26.215925 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 
172.232.0.17 172.232.0.16" May 15 13:07:26.307577 kubelet[2377]: E0515 13:07:26.306979 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:26.708377 update_engine[1518]: I20250515 13:07:26.708076 1518 update_attempter.cc:509] Updating boot flags... May 15 13:07:27.049107 kubelet[2377]: E0515 13:07:27.048871 2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-109-179\" not found" node="172-236-109-179" May 15 13:07:27.135184 kubelet[2377]: I0515 13:07:27.132826 2377 kubelet_node_status.go:75] "Successfully registered node" node="172-236-109-179" May 15 13:07:27.135184 kubelet[2377]: E0515 13:07:27.132858 2377 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-236-109-179\": node \"172-236-109-179\" not found" May 15 13:07:27.152870 kubelet[2377]: E0515 13:07:27.152730 2377 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-236-109-179.183fb53a80b589b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-109-179,UID:172-236-109-179,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-109-179,},FirstTimestamp:2025-05-15 13:07:23.315153335 +0000 UTC m=+0.682705136,LastTimestamp:2025-05-15 13:07:23.315153335 +0000 UTC m=+0.682705136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-109-179,}" May 15 13:07:27.318734 kubelet[2377]: I0515 13:07:27.318600 2377 apiserver.go:52] "Watching apiserver" May 15 13:07:27.335586 kubelet[2377]: I0515 13:07:27.335305 2377 desired_state_of_world_populator.go:154] 
"Finished populating initial desired state of world" May 15 13:07:29.099123 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-9.scope)... May 15 13:07:29.099172 systemd[1]: Reloading... May 15 13:07:29.243606 zram_generator::config[2717]: No configuration found. May 15 13:07:29.318665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 13:07:29.330971 kubelet[2377]: E0515 13:07:29.330927 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:29.419146 kubelet[2377]: E0515 13:07:29.419061 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:29.450237 systemd[1]: Reloading finished in 350 ms. May 15 13:07:29.478411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 13:07:29.505098 systemd[1]: kubelet.service: Deactivated successfully. May 15 13:07:29.505435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 13:07:29.505528 systemd[1]: kubelet.service: Consumed 1.212s CPU time, 119.6M memory peak. May 15 13:07:29.508234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 13:07:29.808241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 13:07:29.816924 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 13:07:29.879128 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 13:07:29.879128 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 13:07:29.879128 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 13:07:29.879532 kubelet[2762]: I0515 13:07:29.879217 2762 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 13:07:29.886831 kubelet[2762]: I0515 13:07:29.886800 2762 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 13:07:29.886831 kubelet[2762]: I0515 13:07:29.886822 2762 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 13:07:29.887022 kubelet[2762]: I0515 13:07:29.886997 2762 server.go:929] "Client rotation is on, will bootstrap in background" May 15 13:07:29.892421 kubelet[2762]: I0515 13:07:29.891705 2762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 13:07:29.894341 kubelet[2762]: I0515 13:07:29.894321 2762 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 13:07:29.904598 kubelet[2762]: I0515 13:07:29.903052 2762 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 13:07:29.910293 kubelet[2762]: I0515 13:07:29.910104 2762 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 13:07:29.910676 kubelet[2762]: I0515 13:07:29.910654 2762 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 13:07:29.910918 kubelet[2762]: I0515 13:07:29.910884 2762 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 13:07:29.912166 kubelet[2762]: I0515 13:07:29.910919 2762 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-109-179","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} May 15 13:07:29.912166 kubelet[2762]: I0515 13:07:29.911267 2762 topology_manager.go:138] "Creating topology manager with none policy" May 15 13:07:29.912166 kubelet[2762]: I0515 13:07:29.911279 2762 container_manager_linux.go:300] "Creating device plugin manager" May 15 13:07:29.912166 kubelet[2762]: I0515 13:07:29.911351 2762 state_mem.go:36] "Initialized new in-memory state store" May 15 13:07:29.912166 kubelet[2762]: I0515 13:07:29.911515 2762 kubelet.go:408] "Attempting to sync node with API server" May 15 13:07:29.912405 kubelet[2762]: I0515 13:07:29.911534 2762 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 13:07:29.912405 kubelet[2762]: I0515 13:07:29.911615 2762 kubelet.go:314] "Adding apiserver pod source" May 15 13:07:29.912405 kubelet[2762]: I0515 13:07:29.911637 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 13:07:29.921321 kubelet[2762]: I0515 13:07:29.921279 2762 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 13:07:29.922455 kubelet[2762]: I0515 13:07:29.921755 2762 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 13:07:29.923721 kubelet[2762]: I0515 13:07:29.923698 2762 server.go:1269] "Started kubelet" May 15 13:07:29.930243 kubelet[2762]: I0515 13:07:29.930154 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 13:07:29.938046 kubelet[2762]: I0515 13:07:29.938027 2762 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 13:07:29.938391 kubelet[2762]: E0515 13:07:29.938349 2762 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-236-109-179\" not found" May 15 13:07:29.939096 kubelet[2762]: I0515 13:07:29.939078 2762 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 13:07:29.939494 kubelet[2762]: I0515 
13:07:29.939429 2762 reconciler.go:26] "Reconciler: start to sync state" May 15 13:07:29.940577 kubelet[2762]: I0515 13:07:29.940487 2762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 13:07:29.941936 kubelet[2762]: I0515 13:07:29.941911 2762 server.go:460] "Adding debug handlers to kubelet server" May 15 13:07:29.943054 kubelet[2762]: I0515 13:07:29.942971 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 13:07:29.943352 kubelet[2762]: I0515 13:07:29.943327 2762 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 13:07:29.943686 kubelet[2762]: I0515 13:07:29.943652 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 13:07:29.945985 kubelet[2762]: I0515 13:07:29.945921 2762 factory.go:221] Registration of the systemd container factory successfully May 15 13:07:29.946150 kubelet[2762]: I0515 13:07:29.946131 2762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 13:07:29.953608 kubelet[2762]: I0515 13:07:29.953338 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 13:07:29.955651 kubelet[2762]: I0515 13:07:29.955324 2762 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 13:07:29.955651 kubelet[2762]: I0515 13:07:29.955457 2762 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 13:07:29.955651 kubelet[2762]: I0515 13:07:29.955480 2762 kubelet.go:2321] "Starting kubelet main sync loop" May 15 13:07:29.955651 kubelet[2762]: E0515 13:07:29.955521 2762 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 13:07:29.964292 kubelet[2762]: I0515 13:07:29.964254 2762 factory.go:221] Registration of the containerd container factory successfully May 15 13:07:29.979961 kubelet[2762]: E0515 13:07:29.979928 2762 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 13:07:30.021250 kubelet[2762]: I0515 13:07:30.021209 2762 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 13:07:30.021472 kubelet[2762]: I0515 13:07:30.021410 2762 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 13:07:30.021472 kubelet[2762]: I0515 13:07:30.021446 2762 state_mem.go:36] "Initialized new in-memory state store" May 15 13:07:30.021767 kubelet[2762]: I0515 13:07:30.021746 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 13:07:30.021848 kubelet[2762]: I0515 13:07:30.021825 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 13:07:30.021906 kubelet[2762]: I0515 13:07:30.021897 2762 policy_none.go:49] "None policy: Start" May 15 13:07:30.022762 kubelet[2762]: I0515 13:07:30.022740 2762 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 13:07:30.022822 kubelet[2762]: I0515 13:07:30.022781 2762 state_mem.go:35] "Initializing new in-memory state store" May 15 13:07:30.022986 kubelet[2762]: I0515 13:07:30.022957 2762 state_mem.go:75] "Updated machine memory state" May 15 13:07:30.028718 kubelet[2762]: I0515 13:07:30.028689 2762 
manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 13:07:30.028899 kubelet[2762]: I0515 13:07:30.028873 2762 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 13:07:30.028947 kubelet[2762]: I0515 13:07:30.028899 2762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 13:07:30.029530 kubelet[2762]: I0515 13:07:30.029445 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 13:07:30.063921 kubelet[2762]: E0515 13:07:30.062978 2762 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-109-179\" already exists" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:07:30.132468 kubelet[2762]: I0515 13:07:30.132419 2762 kubelet_node_status.go:72] "Attempting to register node" node="172-236-109-179" May 15 13:07:30.141344 kubelet[2762]: I0515 13:07:30.141076 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-k8s-certs\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:07:30.141886 kubelet[2762]: I0515 13:07:30.141814 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:07:30.142441 kubelet[2762]: I0515 13:07:30.142041 2762 kubelet_node_status.go:111] "Node was previously registered" node="172-236-109-179" May 15 13:07:30.142441 kubelet[2762]: I0515 13:07:30.142117 2762 kubelet_node_status.go:75] "Successfully 
registered node" node="172-236-109-179" May 15 13:07:30.143163 kubelet[2762]: I0515 13:07:30.142907 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-flexvolume-dir\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:07:30.143285 kubelet[2762]: I0515 13:07:30.143266 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-k8s-certs\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:07:30.143647 kubelet[2762]: I0515 13:07:30.143361 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/56193c5c79b72f02fcd8287c32f469ed-kubeconfig\") pod \"kube-scheduler-172-236-109-179\" (UID: \"56193c5c79b72f02fcd8287c32f469ed\") " pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:07:30.143647 kubelet[2762]: I0515 13:07:30.143384 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbb963492b4d91fca5265ac730055a2a-ca-certs\") pod \"kube-apiserver-172-236-109-179\" (UID: \"dbb963492b4d91fca5265ac730055a2a\") " pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:07:30.143647 kubelet[2762]: I0515 13:07:30.143400 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-109-179\" 
(UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:07:30.143647 kubelet[2762]: I0515 13:07:30.143413 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-ca-certs\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:07:30.143647 kubelet[2762]: I0515 13:07:30.143427 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74cb566fde9a0eb500aa5f409a127a72-kubeconfig\") pod \"kube-controller-manager-172-236-109-179\" (UID: \"74cb566fde9a0eb500aa5f409a127a72\") " pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:07:30.364240 kubelet[2762]: E0515 13:07:30.363315 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:30.364240 kubelet[2762]: E0515 13:07:30.363844 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:30.364240 kubelet[2762]: E0515 13:07:30.363922 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:31.399392 kubelet[2762]: I0515 13:07:31.399239 2762 apiserver.go:52] "Watching apiserver" May 15 13:07:31.442577 kubelet[2762]: E0515 13:07:31.442425 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:31.444800 kubelet[2762]: E0515 13:07:31.443878 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:31.445339 kubelet[2762]: I0515 13:07:31.444725 2762 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 13:07:31.450265 kubelet[2762]: E0515 13:07:31.449710 2762 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-109-179\" already exists" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:07:31.450522 kubelet[2762]: E0515 13:07:31.450471 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:31.535496 kubelet[2762]: I0515 13:07:31.535332 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-109-179" podStartSLOduration=1.5352937020000001 podStartE2EDuration="1.535293702s" podCreationTimestamp="2025-05-15 13:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:07:31.526692444 +0000 UTC m=+1.698990328" watchObservedRunningTime="2025-05-15 13:07:31.535293702 +0000 UTC m=+1.707591586" May 15 13:07:31.551217 kubelet[2762]: I0515 13:07:31.551106 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-109-179" podStartSLOduration=1.551092183 podStartE2EDuration="1.551092183s" podCreationTimestamp="2025-05-15 13:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:07:31.536983615 +0000 UTC m=+1.709281499" 
watchObservedRunningTime="2025-05-15 13:07:31.551092183 +0000 UTC m=+1.723390067" May 15 13:07:31.572925 kubelet[2762]: I0515 13:07:31.572790 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-109-179" podStartSLOduration=2.572771467 podStartE2EDuration="2.572771467s" podCreationTimestamp="2025-05-15 13:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:07:31.551809545 +0000 UTC m=+1.724107429" watchObservedRunningTime="2025-05-15 13:07:31.572771467 +0000 UTC m=+1.745069351" May 15 13:07:32.438295 kubelet[2762]: E0515 13:07:32.438259 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:33.127890 kubelet[2762]: E0515 13:07:33.127838 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:33.439916 kubelet[2762]: E0515 13:07:33.439786 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:36.066839 kubelet[2762]: I0515 13:07:36.066797 2762 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 13:07:36.068458 containerd[1543]: time="2025-05-15T13:07:36.067429751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 13:07:36.068980 kubelet[2762]: I0515 13:07:36.068697 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 13:07:36.269548 sudo[1828]: pam_unix(sudo:session): session closed for user root May 15 13:07:36.323480 sshd[1827]: Connection closed by 139.178.89.65 port 33996 May 15 13:07:36.324543 sshd-session[1825]: pam_unix(sshd:session): session closed for user core May 15 13:07:36.331216 systemd[1]: sshd@9-172.236.109.179:22-139.178.89.65:33996.service: Deactivated successfully. May 15 13:07:36.334517 systemd[1]: session-9.scope: Deactivated successfully. May 15 13:07:36.335172 systemd[1]: session-9.scope: Consumed 5.699s CPU time, 228.9M memory peak. May 15 13:07:36.336694 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. May 15 13:07:36.339224 systemd-logind[1516]: Removed session 9. May 15 13:07:36.848590 systemd[1]: Created slice kubepods-besteffort-podf19dfc37_4274_4d00_98ba_54a6cf08dec2.slice - libcontainer container kubepods-besteffort-podf19dfc37_4274_4d00_98ba_54a6cf08dec2.slice. 
May 15 13:07:37.013044 kubelet[2762]: I0515 13:07:37.012991 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f19dfc37-4274-4d00-98ba-54a6cf08dec2-kube-proxy\") pod \"kube-proxy-cwjrl\" (UID: \"f19dfc37-4274-4d00-98ba-54a6cf08dec2\") " pod="kube-system/kube-proxy-cwjrl" May 15 13:07:37.013177 kubelet[2762]: I0515 13:07:37.013101 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19dfc37-4274-4d00-98ba-54a6cf08dec2-lib-modules\") pod \"kube-proxy-cwjrl\" (UID: \"f19dfc37-4274-4d00-98ba-54a6cf08dec2\") " pod="kube-system/kube-proxy-cwjrl" May 15 13:07:37.013238 kubelet[2762]: I0515 13:07:37.013197 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkhg\" (UniqueName: \"kubernetes.io/projected/f19dfc37-4274-4d00-98ba-54a6cf08dec2-kube-api-access-wfkhg\") pod \"kube-proxy-cwjrl\" (UID: \"f19dfc37-4274-4d00-98ba-54a6cf08dec2\") " pod="kube-system/kube-proxy-cwjrl" May 15 13:07:37.013324 kubelet[2762]: I0515 13:07:37.013306 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19dfc37-4274-4d00-98ba-54a6cf08dec2-xtables-lock\") pod \"kube-proxy-cwjrl\" (UID: \"f19dfc37-4274-4d00-98ba-54a6cf08dec2\") " pod="kube-system/kube-proxy-cwjrl" May 15 13:07:37.158957 kubelet[2762]: E0515 13:07:37.158909 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:37.160679 containerd[1543]: time="2025-05-15T13:07:37.160096226Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-cwjrl,Uid:f19dfc37-4274-4d00-98ba-54a6cf08dec2,Namespace:kube-system,Attempt:0,}" May 15 13:07:37.200247 systemd[1]: Created slice kubepods-besteffort-poddf892b94_50e9_44b7_bfad_3bb7cb8029a0.slice - libcontainer container kubepods-besteffort-poddf892b94_50e9_44b7_bfad_3bb7cb8029a0.slice. May 15 13:07:37.207619 containerd[1543]: time="2025-05-15T13:07:37.207378826Z" level=info msg="connecting to shim 92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937" address="unix:///run/containerd/s/93bdb1981b5261537dc07243f8f607d3c7d5d9a71a983b7d34981432883517bd" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:37.257886 systemd[1]: Started cri-containerd-92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937.scope - libcontainer container 92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937. May 15 13:07:37.295264 containerd[1543]: time="2025-05-15T13:07:37.295215588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cwjrl,Uid:f19dfc37-4274-4d00-98ba-54a6cf08dec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937\"" May 15 13:07:37.296613 kubelet[2762]: E0515 13:07:37.296541 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:37.299860 containerd[1543]: time="2025-05-15T13:07:37.299813379Z" level=info msg="CreateContainer within sandbox \"92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 13:07:37.311798 containerd[1543]: time="2025-05-15T13:07:37.311763559Z" level=info msg="Container 5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:37.315796 kubelet[2762]: I0515 13:07:37.314455 2762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wds46\" (UniqueName: \"kubernetes.io/projected/df892b94-50e9-44b7-bfad-3bb7cb8029a0-kube-api-access-wds46\") pod \"tigera-operator-6f6897fdc5-gkqfs\" (UID: \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\") " pod="tigera-operator/tigera-operator-6f6897fdc5-gkqfs" May 15 13:07:37.315210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113834612.mount: Deactivated successfully. May 15 13:07:37.318970 kubelet[2762]: I0515 13:07:37.318368 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df892b94-50e9-44b7-bfad-3bb7cb8029a0-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-gkqfs\" (UID: \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\") " pod="tigera-operator/tigera-operator-6f6897fdc5-gkqfs" May 15 13:07:37.321613 containerd[1543]: time="2025-05-15T13:07:37.321540300Z" level=info msg="CreateContainer within sandbox \"92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f\"" May 15 13:07:37.323618 containerd[1543]: time="2025-05-15T13:07:37.323591177Z" level=info msg="StartContainer for \"5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f\"" May 15 13:07:37.325912 containerd[1543]: time="2025-05-15T13:07:37.325877587Z" level=info msg="connecting to shim 5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f" address="unix:///run/containerd/s/93bdb1981b5261537dc07243f8f607d3c7d5d9a71a983b7d34981432883517bd" protocol=ttrpc version=3 May 15 13:07:37.350721 systemd[1]: Started cri-containerd-5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f.scope - libcontainer container 5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f. 
May 15 13:07:37.417020 containerd[1543]: time="2025-05-15T13:07:37.416873044Z" level=info msg="StartContainer for \"5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f\" returns successfully" May 15 13:07:37.448498 kubelet[2762]: E0515 13:07:37.448446 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:37.507968 containerd[1543]: time="2025-05-15T13:07:37.507919383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-gkqfs,Uid:df892b94-50e9-44b7-bfad-3bb7cb8029a0,Namespace:tigera-operator,Attempt:0,}" May 15 13:07:37.524735 containerd[1543]: time="2025-05-15T13:07:37.524684538Z" level=info msg="connecting to shim 4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d" address="unix:///run/containerd/s/8ed34d2b5d29a4248fab71c706deca5c23feb2eae136ff8f35b14baf4ec90998" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:37.828679 systemd[1]: Started cri-containerd-4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d.scope - libcontainer container 4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d. May 15 13:07:37.905085 containerd[1543]: time="2025-05-15T13:07:37.905013485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-gkqfs,Uid:df892b94-50e9-44b7-bfad-3bb7cb8029a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\"" May 15 13:07:37.908206 containerd[1543]: time="2025-05-15T13:07:37.907869826Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 13:07:38.745088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830387453.mount: Deactivated successfully. 
May 15 13:07:39.576374 containerd[1543]: time="2025-05-15T13:07:39.576309682Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:39.577322 containerd[1543]: time="2025-05-15T13:07:39.577156056Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 13:07:39.577896 containerd[1543]: time="2025-05-15T13:07:39.577862206Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:39.579529 containerd[1543]: time="2025-05-15T13:07:39.579495632Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:39.580283 containerd[1543]: time="2025-05-15T13:07:39.580248444Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.672126735s" May 15 13:07:39.580364 containerd[1543]: time="2025-05-15T13:07:39.580347066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 13:07:39.583015 containerd[1543]: time="2025-05-15T13:07:39.582414648Z" level=info msg="CreateContainer within sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 13:07:39.588467 containerd[1543]: time="2025-05-15T13:07:39.587917824Z" level=info msg="Container 
7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:39.599540 containerd[1543]: time="2025-05-15T13:07:39.599508816Z" level=info msg="CreateContainer within sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\"" May 15 13:07:39.600819 containerd[1543]: time="2025-05-15T13:07:39.600776266Z" level=info msg="StartContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\"" May 15 13:07:39.602326 containerd[1543]: time="2025-05-15T13:07:39.602279499Z" level=info msg="connecting to shim 7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248" address="unix:///run/containerd/s/8ed34d2b5d29a4248fab71c706deca5c23feb2eae136ff8f35b14baf4ec90998" protocol=ttrpc version=3 May 15 13:07:39.677704 systemd[1]: Started cri-containerd-7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248.scope - libcontainer container 7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248. 
May 15 13:07:39.740967 containerd[1543]: time="2025-05-15T13:07:39.740891512Z" level=info msg="StartContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" returns successfully" May 15 13:07:39.768652 kubelet[2762]: E0515 13:07:39.768543 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:39.791811 kubelet[2762]: I0515 13:07:39.791741 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cwjrl" podStartSLOduration=3.791245211 podStartE2EDuration="3.791245211s" podCreationTimestamp="2025-05-15 13:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:07:37.459802898 +0000 UTC m=+7.632100782" watchObservedRunningTime="2025-05-15 13:07:39.791245211 +0000 UTC m=+9.963543115" May 15 13:07:40.456160 kubelet[2762]: E0515 13:07:40.456125 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:42.004210 kubelet[2762]: E0515 13:07:42.004150 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:42.015916 kubelet[2762]: I0515 13:07:42.015589 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-gkqfs" podStartSLOduration=3.341066645 podStartE2EDuration="5.01557225s" podCreationTimestamp="2025-05-15 13:07:37 +0000 UTC" firstStartedPulling="2025-05-15 13:07:37.906693724 +0000 UTC m=+8.078991608" lastFinishedPulling="2025-05-15 13:07:39.581199329 +0000 UTC m=+9.753497213" 
observedRunningTime="2025-05-15 13:07:40.471969169 +0000 UTC m=+10.644267053" watchObservedRunningTime="2025-05-15 13:07:42.01557225 +0000 UTC m=+12.187870134" May 15 13:07:43.134234 kubelet[2762]: E0515 13:07:43.134037 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:43.318093 systemd[1]: Created slice kubepods-besteffort-podde1a3c20_465c_4eb4_ad1a_f73d2921c906.slice - libcontainer container kubepods-besteffort-podde1a3c20_465c_4eb4_ad1a_f73d2921c906.slice. May 15 13:07:43.338133 kubelet[2762]: I0515 13:07:43.338093 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de1a3c20-465c-4eb4-ad1a-f73d2921c906-tigera-ca-bundle\") pod \"calico-typha-8d889846f-9b2wr\" (UID: \"de1a3c20-465c-4eb4-ad1a-f73d2921c906\") " pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:07:43.338304 kubelet[2762]: I0515 13:07:43.338289 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lvd\" (UniqueName: \"kubernetes.io/projected/de1a3c20-465c-4eb4-ad1a-f73d2921c906-kube-api-access-z5lvd\") pod \"calico-typha-8d889846f-9b2wr\" (UID: \"de1a3c20-465c-4eb4-ad1a-f73d2921c906\") " pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:07:43.338408 kubelet[2762]: I0515 13:07:43.338392 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de1a3c20-465c-4eb4-ad1a-f73d2921c906-typha-certs\") pod \"calico-typha-8d889846f-9b2wr\" (UID: \"de1a3c20-465c-4eb4-ad1a-f73d2921c906\") " pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:07:43.503968 systemd[1]: Created slice kubepods-besteffort-pod1a8a24dd_708e_4ec3_b972_4df98026b344.slice - libcontainer container 
kubepods-besteffort-pod1a8a24dd_708e_4ec3_b972_4df98026b344.slice. May 15 13:07:43.539476 kubelet[2762]: I0515 13:07:43.539430 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-cni-net-dir\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539706 kubelet[2762]: I0515 13:07:43.539495 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-xtables-lock\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539780 kubelet[2762]: I0515 13:07:43.539715 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1a8a24dd-708e-4ec3-b972-4df98026b344-node-certs\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539780 kubelet[2762]: I0515 13:07:43.539730 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-policysync\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539780 kubelet[2762]: I0515 13:07:43.539745 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-cni-bin-dir\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539780 
kubelet[2762]: I0515 13:07:43.539764 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-lib-modules\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539780 kubelet[2762]: I0515 13:07:43.539777 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-var-lib-calico\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539963 kubelet[2762]: I0515 13:07:43.539807 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-cni-log-dir\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539963 kubelet[2762]: I0515 13:07:43.539820 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-var-run-calico\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539963 kubelet[2762]: I0515 13:07:43.539837 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1a8a24dd-708e-4ec3-b972-4df98026b344-flexvol-driver-host\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539963 kubelet[2762]: I0515 13:07:43.539855 2762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5bx\" (UniqueName: \"kubernetes.io/projected/1a8a24dd-708e-4ec3-b972-4df98026b344-kube-api-access-pg5bx\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.539963 kubelet[2762]: I0515 13:07:43.539868 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a8a24dd-708e-4ec3-b972-4df98026b344-tigera-ca-bundle\") pod \"calico-node-h5k9z\" (UID: \"1a8a24dd-708e-4ec3-b972-4df98026b344\") " pod="calico-system/calico-node-h5k9z" May 15 13:07:43.621842 kubelet[2762]: E0515 13:07:43.621764 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:43.622652 kubelet[2762]: E0515 13:07:43.622590 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:43.623877 containerd[1543]: time="2025-05-15T13:07:43.623700947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d889846f-9b2wr,Uid:de1a3c20-465c-4eb4-ad1a-f73d2921c906,Namespace:calico-system,Attempt:0,}" May 15 13:07:43.643587 kubelet[2762]: I0515 13:07:43.640685 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85ebef63-264f-4ef9-b5f5-d3d0ecc23527-varrun\") pod \"csi-node-driver-fxxht\" (UID: \"85ebef63-264f-4ef9-b5f5-d3d0ecc23527\") " pod="calico-system/csi-node-driver-fxxht" May 15 
13:07:43.643587 kubelet[2762]: I0515 13:07:43.640745 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mln\" (UniqueName: \"kubernetes.io/projected/85ebef63-264f-4ef9-b5f5-d3d0ecc23527-kube-api-access-59mln\") pod \"csi-node-driver-fxxht\" (UID: \"85ebef63-264f-4ef9-b5f5-d3d0ecc23527\") " pod="calico-system/csi-node-driver-fxxht" May 15 13:07:43.643587 kubelet[2762]: I0515 13:07:43.640831 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85ebef63-264f-4ef9-b5f5-d3d0ecc23527-socket-dir\") pod \"csi-node-driver-fxxht\" (UID: \"85ebef63-264f-4ef9-b5f5-d3d0ecc23527\") " pod="calico-system/csi-node-driver-fxxht" May 15 13:07:43.643587 kubelet[2762]: I0515 13:07:43.640890 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ebef63-264f-4ef9-b5f5-d3d0ecc23527-kubelet-dir\") pod \"csi-node-driver-fxxht\" (UID: \"85ebef63-264f-4ef9-b5f5-d3d0ecc23527\") " pod="calico-system/csi-node-driver-fxxht" May 15 13:07:43.643587 kubelet[2762]: I0515 13:07:43.640927 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85ebef63-264f-4ef9-b5f5-d3d0ecc23527-registration-dir\") pod \"csi-node-driver-fxxht\" (UID: \"85ebef63-264f-4ef9-b5f5-d3d0ecc23527\") " pod="calico-system/csi-node-driver-fxxht" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.651322 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.657449 kubelet[2762]: W0515 13:07:43.651344 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.651489 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.652211 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.657449 kubelet[2762]: W0515 13:07:43.652496 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.652519 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.653539 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.657449 kubelet[2762]: W0515 13:07:43.653795 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.653959 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.657449 kubelet[2762]: E0515 13:07:43.654960 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.658115 kubelet[2762]: W0515 13:07:43.655083 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.655476 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.658115 kubelet[2762]: W0515 13:07:43.655492 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.656110 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.658115 kubelet[2762]: W0515 13:07:43.656277 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.656292 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.656310 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.656743 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.658115 kubelet[2762]: W0515 13:07:43.656752 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.658115 kubelet[2762]: E0515 13:07:43.656762 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.661214 kubelet[2762]: E0515 13:07:43.657084 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.661214 kubelet[2762]: W0515 13:07:43.657092 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.661214 kubelet[2762]: E0515 13:07:43.657103 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.661214 kubelet[2762]: E0515 13:07:43.657464 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.661214 kubelet[2762]: W0515 13:07:43.657473 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.661214 kubelet[2762]: E0515 13:07:43.657589 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.661214 kubelet[2762]: E0515 13:07:43.660209 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.662676 kubelet[2762]: E0515 13:07:43.662650 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.662676 kubelet[2762]: W0515 13:07:43.662671 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.662768 kubelet[2762]: E0515 13:07:43.662695 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.701839 kubelet[2762]: E0515 13:07:43.701546 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.701839 kubelet[2762]: W0515 13:07:43.701831 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.702762 kubelet[2762]: E0515 13:07:43.702166 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.703946 kubelet[2762]: E0515 13:07:43.703920 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.703946 kubelet[2762]: W0515 13:07:43.703939 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.704005 kubelet[2762]: E0515 13:07:43.703956 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.724695 kubelet[2762]: E0515 13:07:43.724650 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.724695 kubelet[2762]: W0515 13:07:43.724684 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.724903 kubelet[2762]: E0515 13:07:43.724703 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.736586 containerd[1543]: time="2025-05-15T13:07:43.736376053Z" level=info msg="connecting to shim e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df" address="unix:///run/containerd/s/1636cebd6956d7ad9d5582eff2dfb5f8116cc3d16276161e0c5c77858d426fbc" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:43.741663 kubelet[2762]: E0515 13:07:43.741635 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.741663 kubelet[2762]: W0515 13:07:43.741656 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.741961 kubelet[2762]: E0515 13:07:43.741677 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.742248 kubelet[2762]: E0515 13:07:43.742232 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.742248 kubelet[2762]: W0515 13:07:43.742246 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.742461 kubelet[2762]: E0515 13:07:43.742263 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.742461 kubelet[2762]: E0515 13:07:43.742421 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.742461 kubelet[2762]: W0515 13:07:43.742428 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.742461 kubelet[2762]: E0515 13:07:43.742439 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.742766 kubelet[2762]: E0515 13:07:43.742610 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.742766 kubelet[2762]: W0515 13:07:43.742625 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.742766 kubelet[2762]: E0515 13:07:43.742633 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.743190 kubelet[2762]: E0515 13:07:43.743175 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.743326 kubelet[2762]: W0515 13:07:43.743297 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.743457 kubelet[2762]: E0515 13:07:43.743396 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.743944 kubelet[2762]: E0515 13:07:43.743905 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.744265 kubelet[2762]: W0515 13:07:43.744092 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.744265 kubelet[2762]: E0515 13:07:43.744158 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.744678 kubelet[2762]: E0515 13:07:43.744665 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.744772 kubelet[2762]: W0515 13:07:43.744733 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.744946 kubelet[2762]: E0515 13:07:43.744933 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.745199 kubelet[2762]: E0515 13:07:43.745186 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.745364 kubelet[2762]: W0515 13:07:43.745285 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.745418 kubelet[2762]: E0515 13:07:43.745406 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.745765 kubelet[2762]: E0515 13:07:43.745753 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.745884 kubelet[2762]: W0515 13:07:43.745831 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.746046 kubelet[2762]: E0515 13:07:43.745973 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.746272 kubelet[2762]: E0515 13:07:43.746246 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.746272 kubelet[2762]: W0515 13:07:43.746257 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.746473 kubelet[2762]: E0515 13:07:43.746460 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.746756 kubelet[2762]: E0515 13:07:43.746716 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.746830 kubelet[2762]: W0515 13:07:43.746798 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.746936 kubelet[2762]: E0515 13:07:43.746917 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.747227 kubelet[2762]: E0515 13:07:43.747181 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.747227 kubelet[2762]: W0515 13:07:43.747191 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.747589 kubelet[2762]: E0515 13:07:43.747384 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.747859 kubelet[2762]: E0515 13:07:43.747847 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.747969 kubelet[2762]: W0515 13:07:43.747916 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.748056 kubelet[2762]: E0515 13:07:43.748044 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.748325 kubelet[2762]: E0515 13:07:43.748289 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.748325 kubelet[2762]: W0515 13:07:43.748299 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.748512 kubelet[2762]: E0515 13:07:43.748492 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.748797 kubelet[2762]: E0515 13:07:43.748772 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.748797 kubelet[2762]: W0515 13:07:43.748783 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.748962 kubelet[2762]: E0515 13:07:43.748950 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.749220 kubelet[2762]: E0515 13:07:43.749175 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.749220 kubelet[2762]: W0515 13:07:43.749185 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.749401 kubelet[2762]: E0515 13:07:43.749364 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.749796 kubelet[2762]: E0515 13:07:43.749719 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.749796 kubelet[2762]: W0515 13:07:43.749752 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.749886 kubelet[2762]: E0515 13:07:43.749874 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.750082 kubelet[2762]: E0515 13:07:43.750058 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.750082 kubelet[2762]: W0515 13:07:43.750068 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.750243 kubelet[2762]: E0515 13:07:43.750223 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.750432 kubelet[2762]: E0515 13:07:43.750408 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.750432 kubelet[2762]: W0515 13:07:43.750418 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.750597 kubelet[2762]: E0515 13:07:43.750572 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.750907 kubelet[2762]: E0515 13:07:43.750883 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.750907 kubelet[2762]: W0515 13:07:43.750894 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.751905 kubelet[2762]: E0515 13:07:43.751727 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.751973 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.752912 kubelet[2762]: W0515 13:07:43.751985 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752109 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752199 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.752912 kubelet[2762]: W0515 13:07:43.752206 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752264 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752421 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.752912 kubelet[2762]: W0515 13:07:43.752428 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752506 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.752912 kubelet[2762]: E0515 13:07:43.752744 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.753686 kubelet[2762]: W0515 13:07:43.752753 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.753686 kubelet[2762]: E0515 13:07:43.752765 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.753686 kubelet[2762]: E0515 13:07:43.753343 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.753686 kubelet[2762]: W0515 13:07:43.753352 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.753686 kubelet[2762]: E0515 13:07:43.753362 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:43.764739 kubelet[2762]: E0515 13:07:43.764509 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:43.764739 kubelet[2762]: W0515 13:07:43.764526 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:43.764739 kubelet[2762]: E0515 13:07:43.764576 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:43.809723 kubelet[2762]: E0515 13:07:43.809367 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:43.816279 containerd[1543]: time="2025-05-15T13:07:43.816198876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h5k9z,Uid:1a8a24dd-708e-4ec3-b972-4df98026b344,Namespace:calico-system,Attempt:0,}" May 15 13:07:43.864100 systemd[1]: Started cri-containerd-e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df.scope - libcontainer container e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df. May 15 13:07:43.917796 containerd[1543]: time="2025-05-15T13:07:43.917725161Z" level=info msg="connecting to shim 1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5" address="unix:///run/containerd/s/a77803d17418b2d2db4702b8e9402f5186877e7ae232a68d53b97b391b0ad662" namespace=k8s.io protocol=ttrpc version=3 May 15 13:07:43.979818 systemd[1]: Started cri-containerd-1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5.scope - libcontainer container 1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5. 
May 15 13:07:44.046589 containerd[1543]: time="2025-05-15T13:07:44.046223186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d889846f-9b2wr,Uid:de1a3c20-465c-4eb4-ad1a-f73d2921c906,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df\"" May 15 13:07:44.047269 kubelet[2762]: E0515 13:07:44.047245 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:44.049143 containerd[1543]: time="2025-05-15T13:07:44.049107430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 13:07:44.049781 containerd[1543]: time="2025-05-15T13:07:44.049700908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h5k9z,Uid:1a8a24dd-708e-4ec3-b972-4df98026b344,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\"" May 15 13:07:44.050473 kubelet[2762]: E0515 13:07:44.050355 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:44.955999 kubelet[2762]: E0515 13:07:44.955931 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:46.427722 containerd[1543]: time="2025-05-15T13:07:46.427601230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:46.430920 containerd[1543]: time="2025-05-15T13:07:46.428806594Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 13:07:46.430920 containerd[1543]: time="2025-05-15T13:07:46.429734783Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:46.431884 containerd[1543]: time="2025-05-15T13:07:46.431858796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:46.432753 containerd[1543]: time="2025-05-15T13:07:46.432712275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.383565325s" May 15 13:07:46.432808 containerd[1543]: time="2025-05-15T13:07:46.432758885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 13:07:46.436063 containerd[1543]: time="2025-05-15T13:07:46.436030801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 13:07:46.457678 containerd[1543]: time="2025-05-15T13:07:46.457650022Z" level=info msg="CreateContainer within sandbox \"e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 13:07:46.466973 containerd[1543]: time="2025-05-15T13:07:46.466918252Z" level=info msg="Container 93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:46.478613 
containerd[1543]: time="2025-05-15T13:07:46.478549956Z" level=info msg="CreateContainer within sandbox \"e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c\"" May 15 13:07:46.480676 containerd[1543]: time="2025-05-15T13:07:46.480629568Z" level=info msg="StartContainer for \"93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c\"" May 15 13:07:46.483959 containerd[1543]: time="2025-05-15T13:07:46.483919074Z" level=info msg="connecting to shim 93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c" address="unix:///run/containerd/s/1636cebd6956d7ad9d5582eff2dfb5f8116cc3d16276161e0c5c77858d426fbc" protocol=ttrpc version=3 May 15 13:07:46.538748 systemd[1]: Started cri-containerd-93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c.scope - libcontainer container 93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c. 
May 15 13:07:46.699758 containerd[1543]: time="2025-05-15T13:07:46.699632243Z" level=info msg="StartContainer for \"93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c\" returns successfully" May 15 13:07:46.956985 kubelet[2762]: E0515 13:07:46.956848 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:47.480956 kubelet[2762]: E0515 13:07:47.480288 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:47.507753 kubelet[2762]: I0515 13:07:47.507302 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d889846f-9b2wr" podStartSLOduration=2.121641686 podStartE2EDuration="4.507287913s" podCreationTimestamp="2025-05-15 13:07:43 +0000 UTC" firstStartedPulling="2025-05-15 13:07:44.048707606 +0000 UTC m=+14.221005490" lastFinishedPulling="2025-05-15 13:07:46.434353833 +0000 UTC m=+16.606651717" observedRunningTime="2025-05-15 13:07:47.506912959 +0000 UTC m=+17.679210843" watchObservedRunningTime="2025-05-15 13:07:47.507287913 +0000 UTC m=+17.679585797" May 15 13:07:47.537207 containerd[1543]: time="2025-05-15T13:07:47.536614471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:47.539305 containerd[1543]: time="2025-05-15T13:07:47.539255198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 13:07:47.540229 containerd[1543]: time="2025-05-15T13:07:47.539778473Z" level=info 
msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:47.544610 containerd[1543]: time="2025-05-15T13:07:47.544575212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:47.546213 containerd[1543]: time="2025-05-15T13:07:47.546141118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.110071397s" May 15 13:07:47.546304 containerd[1543]: time="2025-05-15T13:07:47.546286779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 13:07:47.550423 containerd[1543]: time="2025-05-15T13:07:47.550378160Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 13:07:47.560155 kubelet[2762]: E0515 13:07:47.560106 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.560155 kubelet[2762]: W0515 13:07:47.560132 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.560155 kubelet[2762]: E0515 13:07:47.560150 2762 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.561456 kubelet[2762]: E0515 13:07:47.561426 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.561456 kubelet[2762]: W0515 13:07:47.561450 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.562792 kubelet[2762]: E0515 13:07:47.561464 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.562792 kubelet[2762]: E0515 13:07:47.562780 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.562792 kubelet[2762]: W0515 13:07:47.562789 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.563092 kubelet[2762]: E0515 13:07:47.562799 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.563092 kubelet[2762]: E0515 13:07:47.563088 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.563178 kubelet[2762]: W0515 13:07:47.563098 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.563178 kubelet[2762]: E0515 13:07:47.563108 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.563669 kubelet[2762]: E0515 13:07:47.563628 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.563669 kubelet[2762]: W0515 13:07:47.563646 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.563669 kubelet[2762]: E0515 13:07:47.563656 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.564213 kubelet[2762]: E0515 13:07:47.564175 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.564213 kubelet[2762]: W0515 13:07:47.564192 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.564213 kubelet[2762]: E0515 13:07:47.564203 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.564473 kubelet[2762]: E0515 13:07:47.564400 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.564473 kubelet[2762]: W0515 13:07:47.564408 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.564473 kubelet[2762]: E0515 13:07:47.564417 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.565235 kubelet[2762]: E0515 13:07:47.564714 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.565235 kubelet[2762]: W0515 13:07:47.564727 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.565235 kubelet[2762]: E0515 13:07:47.564912 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.569602 kubelet[2762]: E0515 13:07:47.565739 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.569602 kubelet[2762]: W0515 13:07:47.565754 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.569602 kubelet[2762]: E0515 13:07:47.565764 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.569602 kubelet[2762]: E0515 13:07:47.566475 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.569602 kubelet[2762]: W0515 13:07:47.566484 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.569602 kubelet[2762]: E0515 13:07:47.566494 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.569770 containerd[1543]: time="2025-05-15T13:07:47.569634717Z" level=info msg="Container e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:47.572112 kubelet[2762]: E0515 13:07:47.571993 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.572112 kubelet[2762]: W0515 13:07:47.572051 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.572112 kubelet[2762]: E0515 13:07:47.572077 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.572537 kubelet[2762]: E0515 13:07:47.572498 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.572764 kubelet[2762]: W0515 13:07:47.572686 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.572764 kubelet[2762]: E0515 13:07:47.572706 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.573472 kubelet[2762]: E0515 13:07:47.573450 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.573472 kubelet[2762]: W0515 13:07:47.573467 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.573538 kubelet[2762]: E0515 13:07:47.573480 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.574292 kubelet[2762]: E0515 13:07:47.574225 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.574292 kubelet[2762]: W0515 13:07:47.574245 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.574292 kubelet[2762]: E0515 13:07:47.574255 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.574970 kubelet[2762]: E0515 13:07:47.574946 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.574970 kubelet[2762]: W0515 13:07:47.574962 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.574970 kubelet[2762]: E0515 13:07:47.574972 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.579270 containerd[1543]: time="2025-05-15T13:07:47.579227964Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\"" May 15 13:07:47.579938 containerd[1543]: time="2025-05-15T13:07:47.579861070Z" level=info msg="StartContainer for \"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\"" May 15 13:07:47.583096 containerd[1543]: time="2025-05-15T13:07:47.583044673Z" level=info msg="connecting to shim e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80" address="unix:///run/containerd/s/a77803d17418b2d2db4702b8e9402f5186877e7ae232a68d53b97b391b0ad662" protocol=ttrpc version=3 May 15 13:07:47.641738 systemd[1]: Started cri-containerd-e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80.scope - libcontainer container e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80. May 15 13:07:47.659772 kubelet[2762]: E0515 13:07:47.659732 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.659772 kubelet[2762]: W0515 13:07:47.659759 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.659923 kubelet[2762]: E0515 13:07:47.659801 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.660184 kubelet[2762]: E0515 13:07:47.660158 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.660184 kubelet[2762]: W0515 13:07:47.660174 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.660249 kubelet[2762]: E0515 13:07:47.660226 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.660726 kubelet[2762]: E0515 13:07:47.660697 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.660775 kubelet[2762]: W0515 13:07:47.660752 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.660814 kubelet[2762]: E0515 13:07:47.660772 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.661596 kubelet[2762]: E0515 13:07:47.661471 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.661596 kubelet[2762]: W0515 13:07:47.661507 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.661596 kubelet[2762]: E0515 13:07:47.661522 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.662141 kubelet[2762]: E0515 13:07:47.661855 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.662141 kubelet[2762]: W0515 13:07:47.661869 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.662141 kubelet[2762]: E0515 13:07:47.661949 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.662458 kubelet[2762]: E0515 13:07:47.662252 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.662458 kubelet[2762]: W0515 13:07:47.662284 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.662458 kubelet[2762]: E0515 13:07:47.662344 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.662801 kubelet[2762]: E0515 13:07:47.662715 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.662801 kubelet[2762]: W0515 13:07:47.662791 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.662879 kubelet[2762]: E0515 13:07:47.662855 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.663176 kubelet[2762]: E0515 13:07:47.663150 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.663218 kubelet[2762]: W0515 13:07:47.663195 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.663218 kubelet[2762]: E0515 13:07:47.663213 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.663686 kubelet[2762]: E0515 13:07:47.663664 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.663686 kubelet[2762]: W0515 13:07:47.663679 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.664541 kubelet[2762]: E0515 13:07:47.663813 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.664541 kubelet[2762]: E0515 13:07:47.664134 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.664541 kubelet[2762]: W0515 13:07:47.664143 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.664541 kubelet[2762]: E0515 13:07:47.664154 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.664696 kubelet[2762]: E0515 13:07:47.664643 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.664696 kubelet[2762]: W0515 13:07:47.664688 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.664762 kubelet[2762]: E0515 13:07:47.664697 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.665219 kubelet[2762]: E0515 13:07:47.665188 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.665219 kubelet[2762]: W0515 13:07:47.665207 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.665281 kubelet[2762]: E0515 13:07:47.665229 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.665784 kubelet[2762]: E0515 13:07:47.665677 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.665784 kubelet[2762]: W0515 13:07:47.665693 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.665784 kubelet[2762]: E0515 13:07:47.665715 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.666063 kubelet[2762]: E0515 13:07:47.666037 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.666063 kubelet[2762]: W0515 13:07:47.666056 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.666117 kubelet[2762]: E0515 13:07:47.666102 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.666427 kubelet[2762]: E0515 13:07:47.666345 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.666427 kubelet[2762]: W0515 13:07:47.666384 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.666427 kubelet[2762]: E0515 13:07:47.666422 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.666758 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.667619 kubelet[2762]: W0515 13:07:47.666772 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.666803 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.667004 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.667619 kubelet[2762]: W0515 13:07:47.667012 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.667024 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.667309 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 13:07:47.667619 kubelet[2762]: W0515 13:07:47.667318 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 13:07:47.667619 kubelet[2762]: E0515 13:07:47.667326 2762 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 13:07:47.752485 containerd[1543]: time="2025-05-15T13:07:47.751244831Z" level=info msg="StartContainer for \"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\" returns successfully" May 15 13:07:47.787053 systemd[1]: cri-containerd-e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80.scope: Deactivated successfully. May 15 13:07:47.794107 containerd[1543]: time="2025-05-15T13:07:47.793975525Z" level=info msg="received exit event container_id:\"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\" id:\"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\" pid:3357 exited_at:{seconds:1747314467 nanos:792751993}" May 15 13:07:47.794107 containerd[1543]: time="2025-05-15T13:07:47.793997815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\" id:\"e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80\" pid:3357 exited_at:{seconds:1747314467 nanos:792751993}" May 15 13:07:47.829550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80-rootfs.mount: Deactivated successfully. 
May 15 13:07:48.486035 kubelet[2762]: E0515 13:07:48.485984 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:48.486610 kubelet[2762]: E0515 13:07:48.486542 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:48.488751 containerd[1543]: time="2025-05-15T13:07:48.488425674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 13:07:48.955885 kubelet[2762]: E0515 13:07:48.955808 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:49.488510 kubelet[2762]: E0515 13:07:49.488445 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:50.957429 kubelet[2762]: E0515 13:07:50.956830 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:52.058637 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1628535386 wd_nsec: 1628535377 May 15 13:07:52.956510 kubelet[2762]: E0515 13:07:52.956445 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:54.957493 kubelet[2762]: E0515 13:07:54.957363 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:55.092923 containerd[1543]: time="2025-05-15T13:07:55.092846723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:55.094049 containerd[1543]: time="2025-05-15T13:07:55.093960720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 13:07:55.094824 containerd[1543]: time="2025-05-15T13:07:55.094755955Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:55.096613 containerd[1543]: time="2025-05-15T13:07:55.096487437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:07:55.097574 containerd[1543]: time="2025-05-15T13:07:55.097144941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.608255573s" May 15 13:07:55.097574 containerd[1543]: 
time="2025-05-15T13:07:55.097184532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 13:07:55.101299 containerd[1543]: time="2025-05-15T13:07:55.101268799Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 13:07:55.113099 containerd[1543]: time="2025-05-15T13:07:55.112737348Z" level=info msg="Container eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7: CDI devices from CRI Config.CDIDevices: []" May 15 13:07:55.127981 containerd[1543]: time="2025-05-15T13:07:55.127927303Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\"" May 15 13:07:55.133990 containerd[1543]: time="2025-05-15T13:07:55.132374914Z" level=info msg="StartContainer for \"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\"" May 15 13:07:55.137831 containerd[1543]: time="2025-05-15T13:07:55.136908615Z" level=info msg="connecting to shim eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7" address="unix:///run/containerd/s/a77803d17418b2d2db4702b8e9402f5186877e7ae232a68d53b97b391b0ad662" protocol=ttrpc version=3 May 15 13:07:55.185702 systemd[1]: Started cri-containerd-eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7.scope - libcontainer container eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7. 
May 15 13:07:55.429714 containerd[1543]: time="2025-05-15T13:07:55.429627385Z" level=info msg="StartContainer for \"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\" returns successfully" May 15 13:07:55.821448 kubelet[2762]: E0515 13:07:55.820397 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:56.821504 kubelet[2762]: E0515 13:07:56.821342 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:56.959618 kubelet[2762]: E0515 13:07:56.956746 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:07:57.203463 systemd[1]: cri-containerd-eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7.scope: Deactivated successfully. May 15 13:07:57.204274 systemd[1]: cri-containerd-eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7.scope: Consumed 2.016s CPU time, 175.1M memory peak, 154M written to disk. 
May 15 13:07:57.208450 containerd[1543]: time="2025-05-15T13:07:57.208381976Z" level=info msg="received exit event container_id:\"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\" id:\"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\" pid:3432 exited_at:{seconds:1747314477 nanos:207926843}" May 15 13:07:57.209548 containerd[1543]: time="2025-05-15T13:07:57.208983620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\" id:\"eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7\" pid:3432 exited_at:{seconds:1747314477 nanos:207926843}" May 15 13:07:57.216992 kubelet[2762]: I0515 13:07:57.216150 2762 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 13:07:57.278543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7-rootfs.mount: Deactivated successfully. May 15 13:07:57.310357 systemd[1]: Created slice kubepods-burstable-podb53c6794_8ef1_4efd_9179_2e706d6227cb.slice - libcontainer container kubepods-burstable-podb53c6794_8ef1_4efd_9179_2e706d6227cb.slice. 
May 15 13:07:57.325187 kubelet[2762]: W0515 13:07:57.324740 2762 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-236-109-179" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-109-179' and this object May 15 13:07:57.325187 kubelet[2762]: E0515 13:07:57.324819 2762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-236-109-179\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-236-109-179' and this object" logger="UnhandledError" May 15 13:07:57.330969 systemd[1]: Created slice kubepods-burstable-pod4bce6dbe_21aa_444f_ac75_71dc3b47fb22.slice - libcontainer container kubepods-burstable-pod4bce6dbe_21aa_444f_ac75_71dc3b47fb22.slice. May 15 13:07:57.346290 systemd[1]: Created slice kubepods-besteffort-pod627c03e7_e267_48fe_b4ed_2069e33dcd5c.slice - libcontainer container kubepods-besteffort-pod627c03e7_e267_48fe_b4ed_2069e33dcd5c.slice. May 15 13:07:57.367917 systemd[1]: Created slice kubepods-besteffort-pod84ee8d18_97d2_488c_bd60_c81efc773f5c.slice - libcontainer container kubepods-besteffort-pod84ee8d18_97d2_488c_bd60_c81efc773f5c.slice. May 15 13:07:57.377128 systemd[1]: Created slice kubepods-besteffort-pod7ce2331b_8e2f_4249_b899_450b084743d6.slice - libcontainer container kubepods-besteffort-pod7ce2331b_8e2f_4249_b899_450b084743d6.slice. 
May 15 13:07:57.401507 kubelet[2762]: I0515 13:07:57.401380 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b53c6794-8ef1-4efd-9179-2e706d6227cb-config-volume\") pod \"coredns-6f6b679f8f-xfdz2\" (UID: \"b53c6794-8ef1-4efd-9179-2e706d6227cb\") " pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:07:57.401694 kubelet[2762]: I0515 13:07:57.401674 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88v2\" (UniqueName: \"kubernetes.io/projected/b53c6794-8ef1-4efd-9179-2e706d6227cb-kube-api-access-v88v2\") pod \"coredns-6f6b679f8f-xfdz2\" (UID: \"b53c6794-8ef1-4efd-9179-2e706d6227cb\") " pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:07:57.503606 kubelet[2762]: I0515 13:07:57.502867 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84ee8d18-97d2-488c-bd60-c81efc773f5c-calico-apiserver-certs\") pod \"calico-apiserver-84ff55988-7cwzf\" (UID: \"84ee8d18-97d2-488c-bd60-c81efc773f5c\") " pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf" May 15 13:07:57.503606 kubelet[2762]: I0515 13:07:57.502936 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhg4\" (UniqueName: \"kubernetes.io/projected/4bce6dbe-21aa-444f-ac75-71dc3b47fb22-kube-api-access-gvhg4\") pod \"coredns-6f6b679f8f-ftdbf\" (UID: \"4bce6dbe-21aa-444f-ac75-71dc3b47fb22\") " pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:07:57.503606 kubelet[2762]: I0515 13:07:57.502960 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7p84\" (UniqueName: \"kubernetes.io/projected/627c03e7-e267-48fe-b4ed-2069e33dcd5c-kube-api-access-l7p84\") pod \"calico-kube-controllers-6f97f99f64-zpxjv\" (UID: 
\"627c03e7-e267-48fe-b4ed-2069e33dcd5c\") " pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:07:57.503606 kubelet[2762]: I0515 13:07:57.502980 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9d2\" (UniqueName: \"kubernetes.io/projected/84ee8d18-97d2-488c-bd60-c81efc773f5c-kube-api-access-wr9d2\") pod \"calico-apiserver-84ff55988-7cwzf\" (UID: \"84ee8d18-97d2-488c-bd60-c81efc773f5c\") " pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf" May 15 13:07:57.503606 kubelet[2762]: I0515 13:07:57.503009 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ce2331b-8e2f-4249-b899-450b084743d6-calico-apiserver-certs\") pod \"calico-apiserver-84ff55988-qmb6b\" (UID: \"7ce2331b-8e2f-4249-b899-450b084743d6\") " pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b" May 15 13:07:57.503932 kubelet[2762]: I0515 13:07:57.503050 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/627c03e7-e267-48fe-b4ed-2069e33dcd5c-tigera-ca-bundle\") pod \"calico-kube-controllers-6f97f99f64-zpxjv\" (UID: \"627c03e7-e267-48fe-b4ed-2069e33dcd5c\") " pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:07:57.503932 kubelet[2762]: I0515 13:07:57.503078 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94sms\" (UniqueName: \"kubernetes.io/projected/7ce2331b-8e2f-4249-b899-450b084743d6-kube-api-access-94sms\") pod \"calico-apiserver-84ff55988-qmb6b\" (UID: \"7ce2331b-8e2f-4249-b899-450b084743d6\") " pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b" May 15 13:07:57.503932 kubelet[2762]: I0515 13:07:57.503108 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bce6dbe-21aa-444f-ac75-71dc3b47fb22-config-volume\") pod \"coredns-6f6b679f8f-ftdbf\" (UID: \"4bce6dbe-21aa-444f-ac75-71dc3b47fb22\") " pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:07:57.661493 containerd[1543]: time="2025-05-15T13:07:57.661439181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:07:57.676353 containerd[1543]: time="2025-05-15T13:07:57.676232084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ff55988-7cwzf,Uid:84ee8d18-97d2-488c-bd60-c81efc773f5c,Namespace:calico-apiserver,Attempt:0,}" May 15 13:07:57.688865 containerd[1543]: time="2025-05-15T13:07:57.688822233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ff55988-qmb6b,Uid:7ce2331b-8e2f-4249-b899-450b084743d6,Namespace:calico-apiserver,Attempt:0,}" May 15 13:07:57.838293 kubelet[2762]: E0515 13:07:57.838035 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:57.841251 containerd[1543]: time="2025-05-15T13:07:57.841168730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 13:07:57.914938 containerd[1543]: time="2025-05-15T13:07:57.914881322Z" level=error msg="Failed to destroy network for sandbox \"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.918535 containerd[1543]: time="2025-05-15T13:07:57.918403664Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84ff55988-7cwzf,Uid:84ee8d18-97d2-488c-bd60-c81efc773f5c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.919382 kubelet[2762]: E0515 13:07:57.919285 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.919482 kubelet[2762]: E0515 13:07:57.919439 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf" May 15 13:07:57.919529 kubelet[2762]: E0515 13:07:57.919478 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf" May 15 13:07:57.919821 kubelet[2762]: E0515 13:07:57.919527 2762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ff55988-7cwzf_calico-apiserver(84ee8d18-97d2-488c-bd60-c81efc773f5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ff55988-7cwzf_calico-apiserver(84ee8d18-97d2-488c-bd60-c81efc773f5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b35424f8bbda31c23ce3b5c9309b694cf2fc08356fedcced58d947237e29f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf" podUID="84ee8d18-97d2-488c-bd60-c81efc773f5c" May 15 13:07:57.928990 containerd[1543]: time="2025-05-15T13:07:57.928888210Z" level=error msg="Failed to destroy network for sandbox \"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.929878 containerd[1543]: time="2025-05-15T13:07:57.929822266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.930234 kubelet[2762]: E0515 13:07:57.930179 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.930285 kubelet[2762]: E0515 13:07:57.930243 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:07:57.930285 kubelet[2762]: E0515 13:07:57.930270 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:07:57.930363 kubelet[2762]: E0515 13:07:57.930313 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77038e220f78e547f878674bd0b358ef497cb395671eb5345636f6ae07a6bd7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:07:57.941131 containerd[1543]: time="2025-05-15T13:07:57.941089187Z" level=error msg="Failed to destroy network for sandbox \"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.943403 containerd[1543]: time="2025-05-15T13:07:57.943349131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84ff55988-qmb6b,Uid:7ce2331b-8e2f-4249-b899-450b084743d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.945388 kubelet[2762]: E0515 13:07:57.945345 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:57.945722 kubelet[2762]: E0515 13:07:57.945400 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b" May 15 13:07:57.945754 kubelet[2762]: E0515 13:07:57.945721 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b" May 15 13:07:57.945808 kubelet[2762]: E0515 13:07:57.945752 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84ff55988-qmb6b_calico-apiserver(7ce2331b-8e2f-4249-b899-450b084743d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84ff55988-qmb6b_calico-apiserver(7ce2331b-8e2f-4249-b899-450b084743d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"468f0957f11c9eb04b5a7dece113c43c640bceb78f61c9bfc9522fddf2109a9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b" podUID="7ce2331b-8e2f-4249-b899-450b084743d6" May 15 13:07:58.520582 kubelet[2762]: E0515 13:07:58.520141 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:58.521104 containerd[1543]: time="2025-05-15T13:07:58.520965468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:07:58.539735 kubelet[2762]: E0515 13:07:58.539674 2762 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:07:58.541586 containerd[1543]: time="2025-05-15T13:07:58.541409741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:07:58.639686 containerd[1543]: time="2025-05-15T13:07:58.639620532Z" level=error msg="Failed to destroy network for sandbox \"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.642764 systemd[1]: run-netns-cni\x2db796c7de\x2d80b6\x2d3419\x2daaf2\x2d0c15afa686d6.mount: Deactivated successfully. May 15 13:07:58.643955 containerd[1543]: time="2025-05-15T13:07:58.643832047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.645203 kubelet[2762]: E0515 13:07:58.644816 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.645203 kubelet[2762]: E0515 13:07:58.645254 2762 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:07:58.645433 kubelet[2762]: E0515 13:07:58.645276 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:07:58.646053 kubelet[2762]: E0515 13:07:58.645714 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbb694b6aa16a22870a2a8b6a57a61b438d323a33c656715a40ac752dbeac0f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:07:58.654733 containerd[1543]: time="2025-05-15T13:07:58.654638753Z" level=error msg="Failed to destroy network for sandbox \"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.658000 containerd[1543]: time="2025-05-15T13:07:58.657018816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.657957 systemd[1]: run-netns-cni\x2dfafdd671\x2d18a8\x2d498c\x2dd197\x2d708497e8c96c.mount: Deactivated successfully. May 15 13:07:58.658164 kubelet[2762]: E0515 13:07:58.657176 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:58.658164 kubelet[2762]: E0515 13:07:58.657210 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:07:58.658164 kubelet[2762]: E0515 13:07:58.657227 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:07:58.658260 kubelet[2762]: E0515 13:07:58.657300 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"638d72318c80acf8778ec9ad5d86bf217617ced7dcc9f5645aecf0fc55b23c4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:07:58.979264 systemd[1]: Created slice kubepods-besteffort-pod85ebef63_264f_4ef9_b5f5_d3d0ecc23527.slice - libcontainer container kubepods-besteffort-pod85ebef63_264f_4ef9_b5f5_d3d0ecc23527.slice. 
May 15 13:07:58.992252 containerd[1543]: time="2025-05-15T13:07:58.992122350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:07:59.160218 containerd[1543]: time="2025-05-15T13:07:59.160020431Z" level=error msg="Failed to destroy network for sandbox \"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:59.161783 containerd[1543]: time="2025-05-15T13:07:59.161741231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:59.162615 kubelet[2762]: E0515 13:07:59.162417 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:07:59.163756 kubelet[2762]: E0515 13:07:59.163189 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:07:59.163756 kubelet[2762]: E0515 13:07:59.163282 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:07:59.163756 kubelet[2762]: E0515 13:07:59.163397 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cddeeed1a86fff0ec7a7e29e9088b035c5ddfef0834704b4cc232da50b467b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:08:00.158294 kubelet[2762]: I0515 13:08:00.158227 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:00.158294 kubelet[2762]: I0515 13:08:00.158297 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:00.162743 kubelet[2762]: I0515 13:08:00.162702 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:00.191876 kubelet[2762]: I0515 13:08:00.191427 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" 
resourceName="ephemeral-storage"
May 15 13:08:00.192177 kubelet[2762]: I0515 13:08:00.192150 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-84ff55988-qmb6b","calico-apiserver/calico-apiserver-84ff55988-7cwzf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/csi-node-driver-fxxht","calico-system/calico-node-h5k9z","tigera-operator/tigera-operator-6f6897fdc5-gkqfs","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:08:00.202043 kubelet[2762]: I0515 13:08:00.201889 2762 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-84ff55988-qmb6b"
May 15 13:08:00.202096 kubelet[2762]: I0515 13:08:00.202046 2762 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-84ff55988-qmb6b"]
May 15 13:08:00.261879 kubelet[2762]: I0515 13:08:00.261821 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ce2331b-8e2f-4249-b899-450b084743d6-calico-apiserver-certs\") pod \"7ce2331b-8e2f-4249-b899-450b084743d6\" (UID: \"7ce2331b-8e2f-4249-b899-450b084743d6\") "
May 15 13:08:00.262071 kubelet[2762]: I0515 13:08:00.261899 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94sms\" (UniqueName: \"kubernetes.io/projected/7ce2331b-8e2f-4249-b899-450b084743d6-kube-api-access-94sms\") pod \"7ce2331b-8e2f-4249-b899-450b084743d6\" (UID: \"7ce2331b-8e2f-4249-b899-450b084743d6\") "
May 15 13:08:00.272601 kubelet[2762]: I0515 13:08:00.272537 2762 kubelet.go:2306] "Pod admission denied" podUID="f6555573-cb0f-4856-b2a4-d0bd5a661625" pod="calico-apiserver/calico-apiserver-84ff55988-klqq2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.279375 systemd[1]: var-lib-kubelet-pods-7ce2331b\x2d8e2f\x2d4249\x2db899\x2d450b084743d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94sms.mount: Deactivated successfully.
May 15 13:08:00.283021 kubelet[2762]: I0515 13:08:00.282729 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce2331b-8e2f-4249-b899-450b084743d6-kube-api-access-94sms" (OuterVolumeSpecName: "kube-api-access-94sms") pod "7ce2331b-8e2f-4249-b899-450b084743d6" (UID: "7ce2331b-8e2f-4249-b899-450b084743d6"). InnerVolumeSpecName "kube-api-access-94sms". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 13:08:00.284842 systemd[1]: var-lib-kubelet-pods-7ce2331b\x2d8e2f\x2d4249\x2db899\x2d450b084743d6-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 13:08:00.288884 kubelet[2762]: I0515 13:08:00.288835 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce2331b-8e2f-4249-b899-450b084743d6-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7ce2331b-8e2f-4249-b899-450b084743d6" (UID: "7ce2331b-8e2f-4249-b899-450b084743d6"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 13:08:00.310379 kubelet[2762]: I0515 13:08:00.310071 2762 kubelet.go:2306] "Pod admission denied" podUID="6c871c4f-92cc-434b-96db-19efaf919fd1" pod="calico-apiserver/calico-apiserver-84ff55988-m44rq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.340022 kubelet[2762]: I0515 13:08:00.339289 2762 kubelet.go:2306] "Pod admission denied" podUID="0b5131ae-90b6-4682-8161-e8db5fdea2a6" pod="calico-apiserver/calico-apiserver-84ff55988-lrg5n" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.362734 kubelet[2762]: I0515 13:08:00.362637 2762 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ce2331b-8e2f-4249-b899-450b084743d6-calico-apiserver-certs\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:00.363004 kubelet[2762]: I0515 13:08:00.362938 2762 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-94sms\" (UniqueName: \"kubernetes.io/projected/7ce2331b-8e2f-4249-b899-450b084743d6-kube-api-access-94sms\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:00.384194 kubelet[2762]: I0515 13:08:00.381540 2762 kubelet.go:2306] "Pod admission denied" podUID="cace3819-6bb6-4d67-ad33-72ee5a58ed7d" pod="calico-apiserver/calico-apiserver-84ff55988-nmvpf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.447191 kubelet[2762]: I0515 13:08:00.446167 2762 kubelet.go:2306] "Pod admission denied" podUID="7e8fbf39-5c21-4a56-8674-89a20250d962" pod="calico-apiserver/calico-apiserver-84ff55988-dpbsw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.488223 kubelet[2762]: I0515 13:08:00.488092 2762 kubelet.go:2306] "Pod admission denied" podUID="fd4190a8-500d-4672-9e25-1ab5e9e5c34f" pod="calico-apiserver/calico-apiserver-84ff55988-hdsdc" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.529831 kubelet[2762]: I0515 13:08:00.529680 2762 kubelet.go:2306] "Pod admission denied" podUID="248da267-ce51-40a4-b285-dd35e3a8aa16" pod="calico-apiserver/calico-apiserver-84ff55988-g6pfp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.588976 kubelet[2762]: I0515 13:08:00.588921 2762 kubelet.go:2306] "Pod admission denied" podUID="e3bb2115-dd89-437b-957b-cf56648cf61b" pod="calico-apiserver/calico-apiserver-84ff55988-g5dql" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.625595 kubelet[2762]: I0515 13:08:00.625337 2762 kubelet.go:2306] "Pod admission denied" podUID="2211efba-a3d5-4442-8bfe-b4bc407c3df4" pod="calico-apiserver/calico-apiserver-84ff55988-7cdnw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.821218 kubelet[2762]: I0515 13:08:00.820800 2762 kubelet.go:2306] "Pod admission denied" podUID="f5f9c68c-4968-4a7e-900d-2b540d10a802" pod="calico-apiserver/calico-apiserver-84ff55988-mdk7b" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.879016 kubelet[2762]: I0515 13:08:00.878935 2762 kubelet.go:2306] "Pod admission denied" podUID="090fe992-c1c1-4a0e-9b1e-518bc424eb70" pod="calico-apiserver/calico-apiserver-84ff55988-ncjc6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:00.899535 systemd[1]: Removed slice kubepods-besteffort-pod7ce2331b_8e2f_4249_b899_450b084743d6.slice - libcontainer container kubepods-besteffort-pod7ce2331b_8e2f_4249_b899_450b084743d6.slice.
May 15 13:08:00.994144 kubelet[2762]: I0515 13:08:00.993194 2762 kubelet.go:2306] "Pod admission denied" podUID="4175b485-001b-47d8-a8ff-4abfc4f5676e" pod="calico-apiserver/calico-apiserver-84ff55988-2n8jw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:01.203785 kubelet[2762]: I0515 13:08:01.202473 2762 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-84ff55988-qmb6b"]
May 15 13:08:01.232234 kubelet[2762]: I0515 13:08:01.232208 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:08:01.232631 kubelet[2762]: I0515 13:08:01.232617 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:08:01.234860 kubelet[2762]: I0515 13:08:01.234832 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:08:01.253730 kubelet[2762]: I0515 13:08:01.253692 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:08:01.253892 kubelet[2762]: I0515 13:08:01.253874 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-84ff55988-7cwzf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","tigera-operator/tigera-operator-6f6897fdc5-gkqfs","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:08:01.261610 kubelet[2762]: I0515 13:08:01.261445 2762 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-84ff55988-7cwzf"
May 15 13:08:01.261610 kubelet[2762]: I0515 13:08:01.261463 2762 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-84ff55988-7cwzf"]
May 15 13:08:01.330829 kubelet[2762]: I0515 13:08:01.330780 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr9d2\" (UniqueName: \"kubernetes.io/projected/84ee8d18-97d2-488c-bd60-c81efc773f5c-kube-api-access-wr9d2\") pod \"84ee8d18-97d2-488c-bd60-c81efc773f5c\" (UID: \"84ee8d18-97d2-488c-bd60-c81efc773f5c\") "
May 15 13:08:01.332636 kubelet[2762]: I0515 13:08:01.332618 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84ee8d18-97d2-488c-bd60-c81efc773f5c-calico-apiserver-certs\") pod \"84ee8d18-97d2-488c-bd60-c81efc773f5c\" (UID: \"84ee8d18-97d2-488c-bd60-c81efc773f5c\") "
May 15 13:08:01.352459 kubelet[2762]: I0515 13:08:01.349192 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ee8d18-97d2-488c-bd60-c81efc773f5c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "84ee8d18-97d2-488c-bd60-c81efc773f5c" (UID: "84ee8d18-97d2-488c-bd60-c81efc773f5c"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 13:08:01.351734 systemd[1]: var-lib-kubelet-pods-84ee8d18\x2d97d2\x2d488c\x2dbd60\x2dc81efc773f5c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 15 13:08:01.359515 systemd[1]: var-lib-kubelet-pods-84ee8d18\x2d97d2\x2d488c\x2dbd60\x2dc81efc773f5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwr9d2.mount: Deactivated successfully.
May 15 13:08:01.361052 kubelet[2762]: I0515 13:08:01.360247 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ee8d18-97d2-488c-bd60-c81efc773f5c-kube-api-access-wr9d2" (OuterVolumeSpecName: "kube-api-access-wr9d2") pod "84ee8d18-97d2-488c-bd60-c81efc773f5c" (UID: "84ee8d18-97d2-488c-bd60-c81efc773f5c"). InnerVolumeSpecName "kube-api-access-wr9d2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 13:08:01.436646 kubelet[2762]: I0515 13:08:01.433361 2762 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84ee8d18-97d2-488c-bd60-c81efc773f5c-calico-apiserver-certs\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:01.436646 kubelet[2762]: I0515 13:08:01.433393 2762 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wr9d2\" (UniqueName: \"kubernetes.io/projected/84ee8d18-97d2-488c-bd60-c81efc773f5c-kube-api-access-wr9d2\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:01.878345 systemd[1]: Removed slice kubepods-besteffort-pod84ee8d18_97d2_488c_bd60_c81efc773f5c.slice - libcontainer container kubepods-besteffort-pod84ee8d18_97d2_488c_bd60_c81efc773f5c.slice.
May 15 13:08:02.266130 kubelet[2762]: I0515 13:08:02.265948 2762 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-84ff55988-7cwzf"]
May 15 13:08:02.300333 kubelet[2762]: I0515 13:08:02.300298 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:08:02.300731 kubelet[2762]: I0515 13:08:02.300717 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:08:02.308370 kubelet[2762]: I0515 13:08:02.308103 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:08:02.355932 kubelet[2762]: I0515 13:08:02.355879 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:08:02.356091 kubelet[2762]: I0515 13:08:02.355960 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/csi-node-driver-fxxht","calico-system/calico-node-h5k9z","tigera-operator/tigera-operator-6f6897fdc5-gkqfs","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:08:02.356091 kubelet[2762]: E0515 13:08:02.355988 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:08:02.356091 kubelet[2762]: E0515 13:08:02.356001 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:08:02.356091 kubelet[2762]: E0515 13:08:02.356013 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:08:02.356091 kubelet[2762]: E0515 13:08:02.356023 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:08:02.356091 kubelet[2762]: E0515 13:08:02.356034 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:08:02.359388 containerd[1543]: time="2025-05-15T13:08:02.359150482Z" level=info msg="StopContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" with timeout 2 (s)"
May 15 13:08:02.362443 containerd[1543]: time="2025-05-15T13:08:02.362414259Z" level=info msg="Stop container \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" with signal terminated"
May 15 13:08:02.960901 systemd[1]: cri-containerd-7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248.scope: Deactivated successfully.
May 15 13:08:02.962137 systemd[1]: cri-containerd-7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248.scope: Consumed 2.118s CPU time, 29.2M memory peak.
May 15 13:08:02.965354 containerd[1543]: time="2025-05-15T13:08:02.964881911Z" level=info msg="received exit event container_id:\"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" id:\"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" pid:3112 exited_at:{seconds:1747314482 nanos:963928196}"
May 15 13:08:02.973210 containerd[1543]: time="2025-05-15T13:08:02.973161953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" id:\"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" pid:3112 exited_at:{seconds:1747314482 nanos:963928196}"
May 15 13:08:03.019204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248-rootfs.mount: Deactivated successfully.
May 15 13:08:03.039744 containerd[1543]: time="2025-05-15T13:08:03.039676075Z" level=info msg="StopContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" returns successfully"
May 15 13:08:03.041358 containerd[1543]: time="2025-05-15T13:08:03.041051881Z" level=info msg="StopPodSandbox for \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\""
May 15 13:08:03.041358 containerd[1543]: time="2025-05-15T13:08:03.041139503Z" level=info msg="Container to stop \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 13:08:03.063227 systemd[1]: cri-containerd-4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d.scope: Deactivated successfully.
May 15 13:08:03.066393 containerd[1543]: time="2025-05-15T13:08:03.066356246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" id:\"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" pid:2957 exit_status:137 exited_at:{seconds:1747314483 nanos:64000754}"
May 15 13:08:03.110930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d-rootfs.mount: Deactivated successfully.
May 15 13:08:03.117725 containerd[1543]: time="2025-05-15T13:08:03.117668178Z" level=info msg="shim disconnected" id=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d namespace=k8s.io
May 15 13:08:03.117725 containerd[1543]: time="2025-05-15T13:08:03.117715998Z" level=warning msg="cleaning up after shim disconnected" id=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d namespace=k8s.io
May 15 13:08:03.117924 containerd[1543]: time="2025-05-15T13:08:03.117724878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 13:08:03.118767 containerd[1543]: time="2025-05-15T13:08:03.118730343Z" level=info msg="received exit event sandbox_id:\"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" exit_status:137 exited_at:{seconds:1747314483 nanos:64000754}"
May 15 13:08:03.123027 containerd[1543]: time="2025-05-15T13:08:03.121014024Z" level=info msg="TearDown network for sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" successfully"
May 15 13:08:03.123027 containerd[1543]: time="2025-05-15T13:08:03.123022763Z" level=info msg="StopPodSandbox for \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" returns successfully"
May 15 13:08:03.124202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d-shm.mount: Deactivated successfully.
May 15 13:08:03.132959 kubelet[2762]: I0515 13:08:03.132506 2762 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-6f6897fdc5-gkqfs"
May 15 13:08:03.132959 kubelet[2762]: I0515 13:08:03.132572 2762 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-6f6897fdc5-gkqfs"]
May 15 13:08:03.163593 kubelet[2762]: I0515 13:08:03.162057 2762 kubelet.go:2306] "Pod admission denied" podUID="5a4c2dcc-41ec-47dd-9525-78438a2d1a55" pod="tigera-operator/tigera-operator-6f6897fdc5-jz49s" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.195387 kubelet[2762]: I0515 13:08:03.194438 2762 kubelet.go:2306] "Pod admission denied" podUID="53490f22-e40f-4fe7-8ed8-bb33c45c1971" pod="tigera-operator/tigera-operator-6f6897fdc5-jsnbr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.224842 kubelet[2762]: I0515 13:08:03.224442 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df892b94-50e9-44b7-bfad-3bb7cb8029a0-var-lib-calico\") pod \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\" (UID: \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\") "
May 15 13:08:03.225280 kubelet[2762]: I0515 13:08:03.225106 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wds46\" (UniqueName: \"kubernetes.io/projected/df892b94-50e9-44b7-bfad-3bb7cb8029a0-kube-api-access-wds46\") pod \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\" (UID: \"df892b94-50e9-44b7-bfad-3bb7cb8029a0\") "
May 15 13:08:03.226316 kubelet[2762]: I0515 13:08:03.224186 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df892b94-50e9-44b7-bfad-3bb7cb8029a0-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "df892b94-50e9-44b7-bfad-3bb7cb8029a0" (UID: "df892b94-50e9-44b7-bfad-3bb7cb8029a0"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 13:08:03.235429 systemd[1]: var-lib-kubelet-pods-df892b94\x2d50e9\x2d44b7\x2dbfad\x2d3bb7cb8029a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwds46.mount: Deactivated successfully.
May 15 13:08:03.238460 kubelet[2762]: I0515 13:08:03.238333 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df892b94-50e9-44b7-bfad-3bb7cb8029a0-kube-api-access-wds46" (OuterVolumeSpecName: "kube-api-access-wds46") pod "df892b94-50e9-44b7-bfad-3bb7cb8029a0" (UID: "df892b94-50e9-44b7-bfad-3bb7cb8029a0"). InnerVolumeSpecName "kube-api-access-wds46". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 13:08:03.246594 kubelet[2762]: I0515 13:08:03.246415 2762 kubelet.go:2306] "Pod admission denied" podUID="30e047a0-84c2-420f-ac04-12c903fd0eb3" pod="tigera-operator/tigera-operator-6f6897fdc5-pwlvg" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.303460 kubelet[2762]: I0515 13:08:03.302973 2762 kubelet.go:2306] "Pod admission denied" podUID="abb0701d-4222-4310-9bc4-2783aa9d7894" pod="tigera-operator/tigera-operator-6f6897fdc5-s6cj5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.326977 kubelet[2762]: I0515 13:08:03.326495 2762 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df892b94-50e9-44b7-bfad-3bb7cb8029a0-var-lib-calico\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:03.326977 kubelet[2762]: I0515 13:08:03.326528 2762 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wds46\" (UniqueName: \"kubernetes.io/projected/df892b94-50e9-44b7-bfad-3bb7cb8029a0-kube-api-access-wds46\") on node \"172-236-109-179\" DevicePath \"\""
May 15 13:08:03.336187 kubelet[2762]: I0515 13:08:03.336148 2762 kubelet.go:2306] "Pod admission denied" podUID="d8a4c480-f7c5-42bd-b36e-2f7871fc9cbc" pod="tigera-operator/tigera-operator-6f6897fdc5-7sfs8" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.368437 containerd[1543]: time="2025-05-15T13:08:03.368144636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273283772: write /var/lib/containerd/tmpmounts/containerd-mount1273283772/usr/lib/calico/bpf/from_nat_info.o: no space left on device"
May 15 13:08:03.370097 containerd[1543]: time="2025-05-15T13:08:03.368194466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748"
May 15 13:08:03.372380 kubelet[2762]: E0515 13:08:03.370715 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273283772: write /var/lib/containerd/tmpmounts/containerd-mount1273283772/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 13:08:03.372380 kubelet[2762]: E0515 13:08:03.370781 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273283772: write /var/lib/containerd/tmpmounts/containerd-mount1273283772/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3"
May 15 13:08:03.371609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273283772.mount: Deactivated successfully.
May 15 13:08:03.374373 kubelet[2762]: E0515 13:08:03.372940 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg5bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-h5k9z_calico-system(1a8a24dd-708e-4ec3-b972-4df98026b344): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273283772: write /var/lib/containerd/tmpmounts/containerd-mount1273283772/usr/lib/calico/bpf/from_nat_info.o: no space left on device" logger="UnhandledError"
May 15 13:08:03.376154 kubelet[2762]: E0515 13:08:03.375897 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273283772: write /var/lib/containerd/tmpmounts/containerd-mount1273283772/usr/lib/calico/bpf/from_nat_info.o: no space left on device\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344"
May 15 13:08:03.377778 kubelet[2762]: I0515 13:08:03.377755 2762 kubelet.go:2306] "Pod admission denied" podUID="65e1ff06-0b30-4ac3-96aa-63bf20a09e25" pod="tigera-operator/tigera-operator-6f6897fdc5-ndr22" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.398583 kubelet[2762]: I0515 13:08:03.398474 2762 kubelet.go:2306] "Pod admission denied" podUID="2a0a9312-1b9f-49ba-af7c-28c47774d690" pod="tigera-operator/tigera-operator-6f6897fdc5-ct8rh" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.424903 kubelet[2762]: I0515 13:08:03.424864 2762 kubelet.go:2306] "Pod admission denied" podUID="64b12cdc-9a5f-49f9-abbc-30db6aaaf449" pod="tigera-operator/tigera-operator-6f6897fdc5-hqzd2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.458311 kubelet[2762]: I0515 13:08:03.458235 2762 kubelet.go:2306] "Pod admission denied" podUID="84d7220e-745b-4d16-b706-6327e9432c70" pod="tigera-operator/tigera-operator-6f6897fdc5-l2lh4" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.618280 kubelet[2762]: I0515 13:08:03.616830 2762 kubelet.go:2306] "Pod admission denied" podUID="b73ee643-50db-4667-b965-ed5c6fa05340" pod="tigera-operator/tigera-operator-6f6897fdc5-sj54w" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.761277 kubelet[2762]: I0515 13:08:03.760904 2762 kubelet.go:2306] "Pod admission denied" podUID="139b0ee1-2319-490e-a506-7886efa79a2e" pod="tigera-operator/tigera-operator-6f6897fdc5-89wt2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.911419 kubelet[2762]: I0515 13:08:03.911235 2762 kubelet.go:2306] "Pod admission denied" podUID="8e3a75ff-f7ee-4d2e-bb77-969e41c457be" pod="tigera-operator/tigera-operator-6f6897fdc5-7zrrp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:03.930974 kubelet[2762]: I0515 13:08:03.930916 2762 scope.go:117] "RemoveContainer" containerID="7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248"
May 15 13:08:03.935997 containerd[1543]: time="2025-05-15T13:08:03.935770770Z" level=info msg="RemoveContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\""
May 15 13:08:03.939088 systemd[1]: Removed slice kubepods-besteffort-poddf892b94_50e9_44b7_bfad_3bb7cb8029a0.slice - libcontainer container kubepods-besteffort-poddf892b94_50e9_44b7_bfad_3bb7cb8029a0.slice.
May 15 13:08:03.939393 systemd[1]: kubepods-besteffort-poddf892b94_50e9_44b7_bfad_3bb7cb8029a0.slice: Consumed 2.148s CPU time, 29.4M memory peak.
May 15 13:08:03.943280 containerd[1543]: time="2025-05-15T13:08:03.943207737Z" level=info msg="RemoveContainer for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" returns successfully"
May 15 13:08:03.943657 kubelet[2762]: I0515 13:08:03.943630 2762 scope.go:117] "RemoveContainer" containerID="7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248"
May 15 13:08:03.944032 containerd[1543]: time="2025-05-15T13:08:03.943988331Z" level=error msg="ContainerStatus for \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\": not found"
May 15 13:08:03.944847 kubelet[2762]: E0515 13:08:03.944816 2762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\": not found" containerID="7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248"
May 15 13:08:03.944918 kubelet[2762]: I0515 13:08:03.944848 2762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248"} err="failed to get container status \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248\": not found"
May 15 13:08:04.058324 kubelet[2762]: I0515 13:08:04.058265 2762 kubelet.go:2306] "Pod admission denied" podUID="7edc8454-51b0-4c65-a5ca-f895c1d63f42" pod="tigera-operator/tigera-operator-6f6897fdc5-qph29" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:04.132921 kubelet[2762]: I0515 13:08:04.132858 2762 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-6f6897fdc5-gkqfs"]
May 15 13:08:04.150091 kubelet[2762]: I0515 13:08:04.150059 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:08:04.150601 kubelet[2762]: I0515 13:08:04.150215 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:08:04.155086 containerd[1543]: time="2025-05-15T13:08:04.154717827Z" level=info msg="StopPodSandbox for \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\""
May 15 13:08:04.155086 containerd[1543]: time="2025-05-15T13:08:04.155016488Z" level=info msg="TearDown network for sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" successfully"
May 15 13:08:04.155086 containerd[1543]: time="2025-05-15T13:08:04.155038559Z" level=info msg="StopPodSandbox for \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" returns successfully"
May 15 13:08:04.156387 containerd[1543]: time="2025-05-15T13:08:04.156363434Z" level=info msg="RemovePodSandbox for \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\""
May 15 13:08:04.156610 containerd[1543]: time="2025-05-15T13:08:04.156472015Z" level=info msg="Forcibly stopping sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\""
May 15 13:08:04.156860 containerd[1543]: time="2025-05-15T13:08:04.156800916Z" level=info msg="TearDown network for sandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" successfully"
May 15 13:08:04.158701 containerd[1543]: time="2025-05-15T13:08:04.158665965Z" level=info msg="Ensure that sandbox 4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d in task-service has been cleanup successfully"
May 15 13:08:04.160903 containerd[1543]: time="2025-05-15T13:08:04.160874315Z" level=info msg="RemovePodSandbox \"4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d\" returns successfully"
May 15 13:08:04.161760 kubelet[2762]: I0515 13:08:04.161588 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:08:04.172231 kubelet[2762]: I0515 13:08:04.172196 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:08:04.172330 kubelet[2762]: I0515 13:08:04.172279 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:08:04.172330 kubelet[2762]: E0515 13:08:04.172308 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:08:04.172330 kubelet[2762]: E0515 13:08:04.172317 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:08:04.172330 kubelet[2762]: E0515 13:08:04.172323 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:08:04.172330 kubelet[2762]: E0515 13:08:04.172330 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172336 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172349 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172357 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172365 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172373 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:08:04.172643 kubelet[2762]: E0515 13:08:04.172381 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:08:04.172643 kubelet[2762]: I0515 13:08:04.172389 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:08:04.206706 kubelet[2762]: I0515 13:08:04.206670 2762 kubelet.go:2306] "Pod admission denied" podUID="0a8a6b1f-266d-40a5-9e94-2984f42a848b" pod="tigera-operator/tigera-operator-6f6897fdc5-2cmls" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:04.462456 kubelet[2762]: I0515 13:08:04.462107 2762 kubelet.go:2306] "Pod admission denied" podUID="8351c1dd-866b-4bfe-aa00-37b650413560" pod="tigera-operator/tigera-operator-6f6897fdc5-glchp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:04.613145 kubelet[2762]: I0515 13:08:04.613098 2762 kubelet.go:2306] "Pod admission denied" podUID="e338aaf1-2901-4a3d-81c2-f012a748edec" pod="tigera-operator/tigera-operator-6f6897fdc5-jjzzh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:04.711284 kubelet[2762]: I0515 13:08:04.711237 2762 kubelet.go:2306] "Pod admission denied" podUID="b7a962dd-fd76-4d3d-bf41-d623ba66395a" pod="tigera-operator/tigera-operator-6f6897fdc5-cmh6l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:04.865461 kubelet[2762]: I0515 13:08:04.865303 2762 kubelet.go:2306] "Pod admission denied" podUID="7a0fef5d-3fbf-48b5-8d83-55345d3b6c85" pod="tigera-operator/tigera-operator-6f6897fdc5-r2hbk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:05.010799 kubelet[2762]: I0515 13:08:05.010727 2762 kubelet.go:2306] "Pod admission denied" podUID="73d2b49a-4708-4e33-ae30-041a82b4167a" pod="tigera-operator/tigera-operator-6f6897fdc5-cp7vr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:05.164651 kubelet[2762]: I0515 13:08:05.163202 2762 kubelet.go:2306] "Pod admission denied" podUID="f9c544ab-e944-4abc-ade5-b022c25c1787" pod="tigera-operator/tigera-operator-6f6897fdc5-dcjk2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:05.310261 kubelet[2762]: I0515 13:08:05.309542 2762 kubelet.go:2306] "Pod admission denied" podUID="ae571d82-e911-4659-a5e2-858a6533e84a" pod="tigera-operator/tigera-operator-6f6897fdc5-f9xw2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:05.461076 kubelet[2762]: I0515 13:08:05.460942 2762 kubelet.go:2306] "Pod admission denied" podUID="351064a3-ee1d-4d14-8c26-74bd74311a82" pod="tigera-operator/tigera-operator-6f6897fdc5-6r4jc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:05.709789 kubelet[2762]: I0515 13:08:05.709729 2762 kubelet.go:2306] "Pod admission denied" podUID="3b7d0568-bc38-473b-8372-279162dc02c8" pod="tigera-operator/tigera-operator-6f6897fdc5-hrs7z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:05.860399 kubelet[2762]: I0515 13:08:05.860265 2762 kubelet.go:2306] "Pod admission denied" podUID="544d4336-1ae4-4b38-a4dd-fefee81e7193" pod="tigera-operator/tigera-operator-6f6897fdc5-pn6jz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.013816 kubelet[2762]: I0515 13:08:06.013749 2762 kubelet.go:2306] "Pod admission denied" podUID="ad2457b7-0e35-4311-a24f-3d20aafc5f77" pod="tigera-operator/tigera-operator-6f6897fdc5-lwf9d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.109691 kubelet[2762]: I0515 13:08:06.109534 2762 kubelet.go:2306] "Pod admission denied" podUID="3c0c9b49-a139-45e4-8e02-f242917d4700" pod="tigera-operator/tigera-operator-6f6897fdc5-4mqwv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.209302 kubelet[2762]: I0515 13:08:06.209253 2762 kubelet.go:2306] "Pod admission denied" podUID="efc78f87-8b02-4082-bba3-d50a03ee1ab8" pod="tigera-operator/tigera-operator-6f6897fdc5-lgchs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.311597 kubelet[2762]: I0515 13:08:06.311524 2762 kubelet.go:2306] "Pod admission denied" podUID="7ad1a810-eb27-4e03-807f-498049c6a38e" pod="tigera-operator/tigera-operator-6f6897fdc5-9p4j8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.366760 kubelet[2762]: I0515 13:08:06.366404 2762 kubelet.go:2306] "Pod admission denied" podUID="49d67f43-dd0b-4600-bdc1-6594996eb11b" pod="tigera-operator/tigera-operator-6f6897fdc5-qc7j6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.461456 kubelet[2762]: I0515 13:08:06.460454 2762 kubelet.go:2306] "Pod admission denied" podUID="2d6b7135-1a81-4c06-9104-1cd8c91391d5" pod="tigera-operator/tigera-operator-6f6897fdc5-bznz4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:06.558537 kubelet[2762]: I0515 13:08:06.558492 2762 kubelet.go:2306] "Pod admission denied" podUID="b59ba7ad-14ed-4210-8de3-9301e2ed1bf8" pod="tigera-operator/tigera-operator-6f6897fdc5-86dss" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.660327 kubelet[2762]: I0515 13:08:06.660266 2762 kubelet.go:2306] "Pod admission denied" podUID="713023a5-4144-457e-9f97-d7612f6e779a" pod="tigera-operator/tigera-operator-6f6897fdc5-ncgfj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.760308 kubelet[2762]: I0515 13:08:06.760174 2762 kubelet.go:2306] "Pod admission denied" podUID="8a09e3fa-c2d1-4ebe-9ae5-5eb568794ef0" pod="tigera-operator/tigera-operator-6f6897fdc5-xbl8w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.810642 kubelet[2762]: I0515 13:08:06.810577 2762 kubelet.go:2306] "Pod admission denied" podUID="6d25db2a-02c6-4d73-b5f6-a1014d47d334" pod="tigera-operator/tigera-operator-6f6897fdc5-gmlps" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:06.911005 kubelet[2762]: I0515 13:08:06.910938 2762 kubelet.go:2306] "Pod admission denied" podUID="74b03fba-8bde-4008-9e14-6c6dc20cde30" pod="tigera-operator/tigera-operator-6f6897fdc5-zk87v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.008338 kubelet[2762]: I0515 13:08:07.008298 2762 kubelet.go:2306] "Pod admission denied" podUID="3aee6f5c-54fc-4c65-bee9-ce1cd4d8b038" pod="tigera-operator/tigera-operator-6f6897fdc5-57lz7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.110729 kubelet[2762]: I0515 13:08:07.110522 2762 kubelet.go:2306] "Pod admission denied" podUID="78bbf7e3-66b0-4763-a091-4745c69de85a" pod="tigera-operator/tigera-operator-6f6897fdc5-6r5p6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:07.313705 kubelet[2762]: I0515 13:08:07.313614 2762 kubelet.go:2306] "Pod admission denied" podUID="6f761c49-36a4-4179-8ed9-e5c580612299" pod="tigera-operator/tigera-operator-6f6897fdc5-h68k2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.411067 kubelet[2762]: I0515 13:08:07.411021 2762 kubelet.go:2306] "Pod admission denied" podUID="2f272ab2-994a-475e-a280-a93774d02b63" pod="tigera-operator/tigera-operator-6f6897fdc5-jpfgz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.509257 kubelet[2762]: I0515 13:08:07.509204 2762 kubelet.go:2306] "Pod admission denied" podUID="a64527a7-305d-41e6-8f68-948b9c75fc12" pod="tigera-operator/tigera-operator-6f6897fdc5-l6wb2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.610023 kubelet[2762]: I0515 13:08:07.609970 2762 kubelet.go:2306] "Pod admission denied" podUID="b4ca10d2-debe-4ac0-a374-a45ab353bb93" pod="tigera-operator/tigera-operator-6f6897fdc5-rq249" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.710953 kubelet[2762]: I0515 13:08:07.710821 2762 kubelet.go:2306] "Pod admission denied" podUID="a78af2cd-7584-4c41-bedc-4ce53a437306" pod="tigera-operator/tigera-operator-6f6897fdc5-57g6p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.811700 kubelet[2762]: I0515 13:08:07.811624 2762 kubelet.go:2306] "Pod admission denied" podUID="6085c536-9a62-488f-97c7-b4067e6d47ee" pod="tigera-operator/tigera-operator-6f6897fdc5-n2xjr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:07.911538 kubelet[2762]: I0515 13:08:07.911450 2762 kubelet.go:2306] "Pod admission denied" podUID="477f5d15-49c8-4a19-a617-e189c9c77852" pod="tigera-operator/tigera-operator-6f6897fdc5-m6ps4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:08.130229 kubelet[2762]: I0515 13:08:08.130157 2762 kubelet.go:2306] "Pod admission denied" podUID="f00631c9-41e3-4e24-aacc-d0d572e118bc" pod="tigera-operator/tigera-operator-6f6897fdc5-lzzhb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.215430 kubelet[2762]: I0515 13:08:08.215347 2762 kubelet.go:2306] "Pod admission denied" podUID="f8bbe0c5-5f36-454d-a7ba-59d48b140392" pod="tigera-operator/tigera-operator-6f6897fdc5-f552m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.261334 kubelet[2762]: I0515 13:08:08.261259 2762 kubelet.go:2306] "Pod admission denied" podUID="45ff8dab-8498-4cc8-8b32-73af9c6c1661" pod="tigera-operator/tigera-operator-6f6897fdc5-pq58z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.364495 kubelet[2762]: I0515 13:08:08.364439 2762 kubelet.go:2306] "Pod admission denied" podUID="fb5ce64b-4e23-44bd-beb9-fd90a59bb8e4" pod="tigera-operator/tigera-operator-6f6897fdc5-gnj6d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.461475 kubelet[2762]: I0515 13:08:08.461217 2762 kubelet.go:2306] "Pod admission denied" podUID="623f3993-36ca-47e1-9a7c-a513da28b653" pod="tigera-operator/tigera-operator-6f6897fdc5-6nqfl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.510030 kubelet[2762]: I0515 13:08:08.509974 2762 kubelet.go:2306] "Pod admission denied" podUID="d7afff40-be66-4a14-805a-93662f942693" pod="tigera-operator/tigera-operator-6f6897fdc5-6rsvb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.612989 kubelet[2762]: I0515 13:08:08.612860 2762 kubelet.go:2306] "Pod admission denied" podUID="68f7c1ec-f00a-4e33-9370-6daa968e57d6" pod="tigera-operator/tigera-operator-6f6897fdc5-df7qf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:08.712502 kubelet[2762]: I0515 13:08:08.712111 2762 kubelet.go:2306] "Pod admission denied" podUID="f0d16095-012e-44d6-a61c-53c95f6cf10b" pod="tigera-operator/tigera-operator-6f6897fdc5-2h4ps" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.813710 kubelet[2762]: I0515 13:08:08.813511 2762 kubelet.go:2306] "Pod admission denied" podUID="fe140a02-e9dd-47e0-b1a8-59c3e3721c41" pod="tigera-operator/tigera-operator-6f6897fdc5-dhdh4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.913037 kubelet[2762]: I0515 13:08:08.912961 2762 kubelet.go:2306] "Pod admission denied" podUID="588e7d2f-77e7-4d07-ae8c-910936a1c83e" pod="tigera-operator/tigera-operator-6f6897fdc5-j8nl9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:08.959022 containerd[1543]: time="2025-05-15T13:08:08.958750217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:08:09.027840 kubelet[2762]: I0515 13:08:09.026125 2762 kubelet.go:2306] "Pod admission denied" podUID="4e4c6968-a5ad-4b59-919c-098794fafbee" pod="tigera-operator/tigera-operator-6f6897fdc5-wmnkr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:09.079988 containerd[1543]: time="2025-05-15T13:08:09.079899914Z" level=error msg="Failed to destroy network for sandbox \"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:09.082187 systemd[1]: run-netns-cni\x2d14837f11\x2dedb7\x2d4f8a\x2ddd5b\x2ddec00b51eb9a.mount: Deactivated successfully. 
May 15 13:08:09.084797 containerd[1543]: time="2025-05-15T13:08:09.084063721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:08:09.086101 kubelet[2762]: E0515 13:08:09.085520 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:08:09.086101 kubelet[2762]: E0515 13:08:09.085714 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:08:09.086101 kubelet[2762]: E0515 13:08:09.085762 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:08:09.086101 kubelet[2762]: E0515 13:08:09.085861 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac3a3327f4c85761ea22699387951b24233ead67c37c5f91cf7bdd16dd2f450\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:08:09.110662 kubelet[2762]: I0515 13:08:09.110615 2762 kubelet.go:2306] "Pod admission denied" podUID="3e1bcb6f-9de1-4523-ba20-b7f8a036c1a0" pod="tigera-operator/tigera-operator-6f6897fdc5-q6rk6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.213982 kubelet[2762]: I0515 13:08:09.213933 2762 kubelet.go:2306] "Pod admission denied" podUID="6c8f852b-af5e-4ffe-aab4-8b418d25bc8a" pod="tigera-operator/tigera-operator-6f6897fdc5-t62rl" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.311010 kubelet[2762]: I0515 13:08:09.310541 2762 kubelet.go:2306] "Pod admission denied" podUID="907a5b14-c51f-455b-8244-22db42dfcdd3" pod="tigera-operator/tigera-operator-6f6897fdc5-tn2tf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.415071 kubelet[2762]: I0515 13:08:09.415024 2762 kubelet.go:2306] "Pod admission denied" podUID="ac67e05b-cfe9-4d6c-9dde-baf43945f1c0" pod="tigera-operator/tigera-operator-6f6897fdc5-mkhr7" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.519836 kubelet[2762]: I0515 13:08:09.519761 2762 kubelet.go:2306] "Pod admission denied" podUID="d547be32-6bbf-4981-a072-4969c6eda788" pod="tigera-operator/tigera-operator-6f6897fdc5-wkv7h" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.612163 kubelet[2762]: I0515 13:08:09.611742 2762 kubelet.go:2306] "Pod admission denied" podUID="e442ab79-8bae-456d-9200-f201a2ecc560" pod="tigera-operator/tigera-operator-6f6897fdc5-2fkc7" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.711591 kubelet[2762]: I0515 13:08:09.711401 2762 kubelet.go:2306] "Pod admission denied" podUID="a90e567f-e8a5-47a2-b38e-949db827f352" pod="tigera-operator/tigera-operator-6f6897fdc5-bbbln" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.811481 kubelet[2762]: I0515 13:08:09.811417 2762 kubelet.go:2306] "Pod admission denied" podUID="2b2b627d-214d-4210-8da1-897a29bd1bb2" pod="tigera-operator/tigera-operator-6f6897fdc5-77sfv" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:09.912368 kubelet[2762]: I0515 13:08:09.912203 2762 kubelet.go:2306] "Pod admission denied" podUID="57eb3cc6-8b8a-4afb-b457-be70ea6f6f99" pod="tigera-operator/tigera-operator-6f6897fdc5-z8pgh" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.013492 kubelet[2762]: I0515 13:08:10.013435 2762 kubelet.go:2306] "Pod admission denied" podUID="be533ce4-2caf-4ff3-98f7-dfdee367cb43" pod="tigera-operator/tigera-operator-6f6897fdc5-6hjb9" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.212320 kubelet[2762]: I0515 13:08:10.212179 2762 kubelet.go:2306] "Pod admission denied" podUID="a8ae0b18-e102-48e5-8546-4a298af9269d" pod="tigera-operator/tigera-operator-6f6897fdc5-68k2t" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.311305 kubelet[2762]: I0515 13:08:10.311219 2762 kubelet.go:2306] "Pod admission denied" podUID="0acdc305-d2be-498f-b447-b4266316ed1a" pod="tigera-operator/tigera-operator-6f6897fdc5-krk7s" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.415715 kubelet[2762]: I0515 13:08:10.415656 2762 kubelet.go:2306] "Pod admission denied" podUID="455dc90f-a251-4728-b491-e9094d8d7576" pod="tigera-operator/tigera-operator-6f6897fdc5-j9gm5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.512955 kubelet[2762]: I0515 13:08:10.512651 2762 kubelet.go:2306] "Pod admission denied" podUID="cabcdf70-055d-4133-843a-2ac08d3ca287" pod="tigera-operator/tigera-operator-6f6897fdc5-vbcrt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.611544 kubelet[2762]: I0515 13:08:10.611476 2762 kubelet.go:2306] "Pod admission denied" podUID="1a9fa4cc-e041-428a-be47-f72d0f0963bc" pod="tigera-operator/tigera-operator-6f6897fdc5-9bshq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.710915 kubelet[2762]: I0515 13:08:10.710860 2762 kubelet.go:2306] "Pod admission denied" podUID="4e7b0a7c-7289-4a75-828c-06b156f1a392" pod="tigera-operator/tigera-operator-6f6897fdc5-dx4g6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.811966 kubelet[2762]: I0515 13:08:10.811642 2762 kubelet.go:2306] "Pod admission denied" podUID="02d290f0-74df-48f3-975d-2efc8833350e" pod="tigera-operator/tigera-operator-6f6897fdc5-ftzxd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:10.912680 kubelet[2762]: I0515 13:08:10.912637 2762 kubelet.go:2306] "Pod admission denied" podUID="ccfa49ad-5ceb-4504-b409-63f6bb4c0e8d" pod="tigera-operator/tigera-operator-6f6897fdc5-v6zs2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.010924 kubelet[2762]: I0515 13:08:11.010880 2762 kubelet.go:2306] "Pod admission denied" podUID="c14d4417-2641-4d4b-899b-302818ac1aec" pod="tigera-operator/tigera-operator-6f6897fdc5-kt79n" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.211169 kubelet[2762]: I0515 13:08:11.211116 2762 kubelet.go:2306] "Pod admission denied" podUID="6ab49f8a-9f91-4cca-bd26-6ddba8ef61d8" pod="tigera-operator/tigera-operator-6f6897fdc5-n9gqh" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.330017 kubelet[2762]: I0515 13:08:11.328985 2762 kubelet.go:2306] "Pod admission denied" podUID="ec60aa7a-7d22-4a8c-a60c-b50ab4b0f187" pod="tigera-operator/tigera-operator-6f6897fdc5-c9t9r" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.412838 kubelet[2762]: I0515 13:08:11.412795 2762 kubelet.go:2306] "Pod admission denied" podUID="697d790e-a5dc-4c32-a667-89d00b8714ca" pod="tigera-operator/tigera-operator-6f6897fdc5-9x9pp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.514160 kubelet[2762]: I0515 13:08:11.512809 2762 kubelet.go:2306] "Pod admission denied" podUID="9f999e70-f43b-4b5a-9c98-6ae2dec61ee1" pod="tigera-operator/tigera-operator-6f6897fdc5-lxmcn" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.579631 kubelet[2762]: I0515 13:08:11.578543 2762 kubelet.go:2306] "Pod admission denied" podUID="2dd11152-4153-4938-a419-f3d46a72e430" pod="tigera-operator/tigera-operator-6f6897fdc5-gc774" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.664465 kubelet[2762]: I0515 13:08:11.664397 2762 kubelet.go:2306] "Pod admission denied" podUID="9d37ccf4-f5a9-43cc-b0c7-a8bfa104d945" pod="tigera-operator/tigera-operator-6f6897fdc5-knpkw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.769944 kubelet[2762]: I0515 13:08:11.768323 2762 kubelet.go:2306] "Pod admission denied" podUID="28efc96f-e438-4791-bbe6-2fd79a3ef675" pod="tigera-operator/tigera-operator-6f6897fdc5-v65tr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.866474 kubelet[2762]: I0515 13:08:11.866415 2762 kubelet.go:2306] "Pod admission denied" podUID="43c34502-1a39-4b23-a5c0-87512b8e2051" pod="tigera-operator/tigera-operator-6f6897fdc5-8dvbb" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:11.960073 kubelet[2762]: E0515 13:08:11.959296 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:08:11.961930 containerd[1543]: time="2025-05-15T13:08:11.961811360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}"
May 15 13:08:11.971935 kubelet[2762]: I0515 13:08:11.971879 2762 kubelet.go:2306] "Pod admission denied" podUID="9b18ce0b-2849-48e8-894d-28bccb8d9952" pod="tigera-operator/tigera-operator-6f6897fdc5-qd8lf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.024691 kubelet[2762]: I0515 13:08:12.024324 2762 kubelet.go:2306] "Pod admission denied" podUID="5d3f474d-99e8-4c90-97c0-e2679eb3cf75" pod="tigera-operator/tigera-operator-6f6897fdc5-x9kdm" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.063672 containerd[1543]: time="2025-05-15T13:08:12.063543043Z" level=error msg="Failed to destroy network for sandbox \"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:08:12.066429 systemd[1]: run-netns-cni\x2da540ad33\x2d75e4\x2d8b36\x2d8209\x2d7b1eb9e8bbbf.mount: Deactivated successfully.
May 15 13:08:12.068938 containerd[1543]: time="2025-05-15T13:08:12.068865702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:08:12.069762 kubelet[2762]: E0515 13:08:12.069698 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:08:12.069859 kubelet[2762]: E0515 13:08:12.069785 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:08:12.069859 kubelet[2762]: E0515 13:08:12.069812 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:08:12.069929 kubelet[2762]: E0515 13:08:12.069867 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9549ddb0b736e949dc8161e1fe7537ffa1818aaeaa94209da51ecf6d62b57363\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22"
May 15 13:08:12.112391 kubelet[2762]: I0515 13:08:12.112325 2762 kubelet.go:2306] "Pod admission denied" podUID="87ecd675-c224-4af3-ab64-b781f20a4080" pod="tigera-operator/tigera-operator-6f6897fdc5-8qvvk" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.211582 kubelet[2762]: I0515 13:08:12.211416 2762 kubelet.go:2306] "Pod admission denied" podUID="b9ee12aa-cd35-4d08-a7ef-a8bc9e6f30f8" pod="tigera-operator/tigera-operator-6f6897fdc5-22lpd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.315933 kubelet[2762]: I0515 13:08:12.311391 2762 kubelet.go:2306] "Pod admission denied" podUID="8d81588c-3572-4d92-bf84-bacacb7c2778" pod="tigera-operator/tigera-operator-6f6897fdc5-kw7xq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.414944 kubelet[2762]: I0515 13:08:12.414873 2762 kubelet.go:2306] "Pod admission denied" podUID="a31bf7fe-78d2-418b-9e18-b52945881622" pod="tigera-operator/tigera-operator-6f6897fdc5-q2hbj" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.512814 kubelet[2762]: I0515 13:08:12.512755 2762 kubelet.go:2306] "Pod admission denied" podUID="e800d313-e5c2-4b37-b41c-e591dae6bbcb" pod="tigera-operator/tigera-operator-6f6897fdc5-w67lv" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.615038 kubelet[2762]: I0515 13:08:12.614617 2762 kubelet.go:2306] "Pod admission denied" podUID="10c2b159-669b-4452-accf-017fc013d875" pod="tigera-operator/tigera-operator-6f6897fdc5-6869t" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.660905 kubelet[2762]: I0515 13:08:12.660842 2762 kubelet.go:2306] "Pod admission denied" podUID="6cd769b4-195d-4e64-b388-b5417430c733" pod="tigera-operator/tigera-operator-6f6897fdc5-lzdkd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.769205 kubelet[2762]: I0515 13:08:12.769133 2762 kubelet.go:2306] "Pod admission denied" podUID="119a99db-f4c5-42b2-840d-80e31167cb8c" pod="tigera-operator/tigera-operator-6f6897fdc5-cj8mw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.862341 kubelet[2762]: I0515 13:08:12.862298 2762 kubelet.go:2306] "Pod admission denied" podUID="e597eada-d892-4fbe-af0b-29c2dfacfb1b" pod="tigera-operator/tigera-operator-6f6897fdc5-mcjh6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:12.965863 kubelet[2762]: I0515 13:08:12.965797 2762 kubelet.go:2306] "Pod admission denied" podUID="604e51ad-06b9-4891-870c-d8bdc804a221" pod="tigera-operator/tigera-operator-6f6897fdc5-7qdbr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.061099 kubelet[2762]: I0515 13:08:13.061050 2762 kubelet.go:2306] "Pod admission denied" podUID="fc8b0543-4312-49e3-8fc4-9bf2885f2f9d" pod="tigera-operator/tigera-operator-6f6897fdc5-hmwbx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.161490 kubelet[2762]: I0515 13:08:13.161375 2762 kubelet.go:2306] "Pod admission denied" podUID="cfa2c7a0-4129-4275-b6bb-bf38a1334828" pod="tigera-operator/tigera-operator-6f6897fdc5-nxr9k" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.361128 kubelet[2762]: I0515 13:08:13.360704 2762 kubelet.go:2306] "Pod admission denied" podUID="510c23ab-1562-4bda-934c-fbf1ad357053" pod="tigera-operator/tigera-operator-6f6897fdc5-cjbjq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.460611 kubelet[2762]: I0515 13:08:13.460484 2762 kubelet.go:2306] "Pod admission denied" podUID="d2abaf5f-843c-4ff9-b51c-2f60bb391a3c" pod="tigera-operator/tigera-operator-6f6897fdc5-2kgmw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.561698 kubelet[2762]: I0515 13:08:13.561624 2762 kubelet.go:2306] "Pod admission denied" podUID="abe6336e-3220-4605-a921-7a7b586ef372" pod="tigera-operator/tigera-operator-6f6897fdc5-wjk84" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.663257 kubelet[2762]: I0515 13:08:13.663198 2762 kubelet.go:2306] "Pod admission denied" podUID="b591fdc7-2db9-436e-bbab-3cdf187d617c" pod="tigera-operator/tigera-operator-6f6897fdc5-jbf5z" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.711393 kubelet[2762]: I0515 13:08:13.711293 2762 kubelet.go:2306] "Pod admission denied" podUID="7fbc4d16-73a1-4a11-967a-0f09196347ce" pod="tigera-operator/tigera-operator-6f6897fdc5-g9wfx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.813511 kubelet[2762]: I0515 13:08:13.813467 2762 kubelet.go:2306] "Pod admission denied" podUID="f5aa6e61-2243-4bc8-bf7c-620df86d47cd" pod="tigera-operator/tigera-operator-6f6897fdc5-6t9dx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.911706 kubelet[2762]: I0515 13:08:13.911646 2762 kubelet.go:2306] "Pod admission denied" podUID="868f6209-6c43-44bd-836e-dd754157edcb" pod="tigera-operator/tigera-operator-6f6897fdc5-sggbs" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:08:13.957258 kubelet[2762]: E0515 13:08:13.957136 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:08:13.958195 containerd[1543]: time="2025-05-15T13:08:13.957989053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}"
May 15 13:08:13.960689 containerd[1543]: time="2025-05-15T13:08:13.960630883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}"
May 15 13:08:14.018097 kubelet[2762]: I0515 13:08:14.017984 2762 kubelet.go:2306] "Pod admission denied" podUID="2151219a-2149-4ec7-a0bd-5fbd69624712" pod="tigera-operator/tigera-operator-6f6897fdc5-6jdm9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:14.089102 containerd[1543]: time="2025-05-15T13:08:14.088890556Z" level=error msg="Failed to destroy network for sandbox \"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.092531 systemd[1]: run-netns-cni\x2d4d3f243c\x2d7019\x2dc406\x2d965e\x2de710d0ef3f2d.mount: Deactivated successfully. May 15 13:08:14.097222 containerd[1543]: time="2025-05-15T13:08:14.097177135Z" level=error msg="Failed to destroy network for sandbox \"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.097424 containerd[1543]: time="2025-05-15T13:08:14.097200195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.098594 kubelet[2762]: E0515 13:08:14.097885 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.098594 kubelet[2762]: E0515 13:08:14.097952 
2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:14.098594 kubelet[2762]: E0515 13:08:14.097976 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:14.098594 kubelet[2762]: E0515 13:08:14.098077 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"536f21a1452cb6600698050831d0f93cf9032491dae52a7393680165d0e7d823\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:08:14.101516 systemd[1]: run-netns-cni\x2d21e4b428\x2dcf23\x2db996\x2d54ff\x2d4905694946d0.mount: Deactivated successfully. 
May 15 13:08:14.102646 containerd[1543]: time="2025-05-15T13:08:14.102526512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.103347 kubelet[2762]: E0515 13:08:14.103234 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:14.103347 kubelet[2762]: E0515 13:08:14.103303 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:14.103347 kubelet[2762]: E0515 13:08:14.103322 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" 
May 15 13:08:14.104239 kubelet[2762]: E0515 13:08:14.104204 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a49571a2b605c396b033551791c4d2a9b82d03085505f62d64f874714073b39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:08:14.124204 kubelet[2762]: I0515 13:08:14.124172 2762 kubelet.go:2306] "Pod admission denied" podUID="1538719d-a85b-4a74-811c-05e78ccafb1c" pod="tigera-operator/tigera-operator-6f6897fdc5-9xnmt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:14.196602 kubelet[2762]: I0515 13:08:14.196538 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:14.196735 kubelet[2762]: I0515 13:08:14.196630 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:14.198383 kubelet[2762]: I0515 13:08:14.198327 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:14.218739 kubelet[2762]: I0515 13:08:14.218380 2762 kubelet.go:2306] "Pod admission denied" podUID="9c8e3959-3723-44ee-a9cd-b7d17e1cf2e7" pod="tigera-operator/tigera-operator-6f6897fdc5-z6wmw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:14.221293 kubelet[2762]: I0515 13:08:14.221253 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:08:14.221436 kubelet[2762]: I0515 13:08:14.221373 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:08:14.221436 kubelet[2762]: E0515 13:08:14.221423 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:14.221436 kubelet[2762]: E0515 13:08:14.221432 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221439 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221447 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221453 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221471 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221480 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221488 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221496 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:08:14.221547 kubelet[2762]: E0515 13:08:14.221504 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:08:14.221547 kubelet[2762]: I0515 13:08:14.221517 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:08:14.313049 kubelet[2762]: I0515 13:08:14.312995 2762 kubelet.go:2306] "Pod admission denied" podUID="2b827dd8-fdb1-415a-a297-fd73d10b1554" pod="tigera-operator/tigera-operator-6f6897fdc5-x5bdb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:14.415457 kubelet[2762]: I0515 13:08:14.415412 2762 kubelet.go:2306] "Pod admission denied" podUID="8931b37b-7e6a-499b-81c5-db508db411fc" pod="tigera-operator/tigera-operator-6f6897fdc5-65knl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:14.616730 kubelet[2762]: I0515 13:08:14.616473 2762 kubelet.go:2306] "Pod admission denied" podUID="4c1dbeef-3f99-41cb-846e-98ec7de46413" pod="tigera-operator/tigera-operator-6f6897fdc5-jfw6h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:14.712404 kubelet[2762]: I0515 13:08:14.712122 2762 kubelet.go:2306] "Pod admission denied" podUID="6302430a-7d4b-42b7-b585-d0d26c5ed315" pod="tigera-operator/tigera-operator-6f6897fdc5-fns65" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:14.828257 kubelet[2762]: I0515 13:08:14.828188 2762 kubelet.go:2306] "Pod admission denied" podUID="1e2aa212-14ad-4c15-a576-4fee92b7c1e9" pod="tigera-operator/tigera-operator-6f6897fdc5-dxwcj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:14.911968 kubelet[2762]: I0515 13:08:14.911898 2762 kubelet.go:2306] "Pod admission denied" podUID="77520266-6f71-4eb6-8c95-e85995737583" pod="tigera-operator/tigera-operator-6f6897fdc5-t2rzd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.013991 kubelet[2762]: I0515 13:08:15.013933 2762 kubelet.go:2306] "Pod admission denied" podUID="ef4917cb-ae24-4201-824c-657365c49470" pod="tigera-operator/tigera-operator-6f6897fdc5-9qclv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.114593 kubelet[2762]: I0515 13:08:15.114526 2762 kubelet.go:2306] "Pod admission denied" podUID="5539fb10-9e62-46b6-8341-5b7b9b03ecc3" pod="tigera-operator/tigera-operator-6f6897fdc5-92h4z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.164217 kubelet[2762]: I0515 13:08:15.164086 2762 kubelet.go:2306] "Pod admission denied" podUID="58768a8c-3147-4c22-be1b-216255eb870f" pod="tigera-operator/tigera-operator-6f6897fdc5-cmf59" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.261081 kubelet[2762]: I0515 13:08:15.260953 2762 kubelet.go:2306] "Pod admission denied" podUID="7f9130f8-610e-487b-b3d7-b48053a0e70a" pod="tigera-operator/tigera-operator-6f6897fdc5-g5pg7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.463308 kubelet[2762]: I0515 13:08:15.462847 2762 kubelet.go:2306] "Pod admission denied" podUID="9e70598c-058f-4ac7-a25a-9ad59fef8069" pod="tigera-operator/tigera-operator-6f6897fdc5-6cwpc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:15.561454 kubelet[2762]: I0515 13:08:15.561399 2762 kubelet.go:2306] "Pod admission denied" podUID="abcf8997-7c80-41ad-92e8-8a0bcbe23a82" pod="tigera-operator/tigera-operator-6f6897fdc5-trvnh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.665333 kubelet[2762]: I0515 13:08:15.665267 2762 kubelet.go:2306] "Pod admission denied" podUID="dd449e05-3309-48f4-b8c1-c358be2d5750" pod="tigera-operator/tigera-operator-6f6897fdc5-f9mmz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.765231 kubelet[2762]: I0515 13:08:15.765067 2762 kubelet.go:2306] "Pod admission denied" podUID="57786615-2a60-451f-8240-644f39361771" pod="tigera-operator/tigera-operator-6f6897fdc5-stppz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.861596 kubelet[2762]: I0515 13:08:15.861535 2762 kubelet.go:2306] "Pod admission denied" podUID="2a6ed759-c39c-4996-92ef-7bc6ba1678a0" pod="tigera-operator/tigera-operator-6f6897fdc5-cmxdc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:15.965041 kubelet[2762]: I0515 13:08:15.964987 2762 kubelet.go:2306] "Pod admission denied" podUID="d5180891-8f0c-4a16-afd0-12e02e9249c2" pod="tigera-operator/tigera-operator-6f6897fdc5-b7xzb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.063707 kubelet[2762]: I0515 13:08:16.063580 2762 kubelet.go:2306] "Pod admission denied" podUID="154b963a-fe05-4c47-87ca-74bdd8907ac7" pod="tigera-operator/tigera-operator-6f6897fdc5-4fv6k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.163539 kubelet[2762]: I0515 13:08:16.163467 2762 kubelet.go:2306] "Pod admission denied" podUID="184ec2a7-353e-4811-b2e3-865e45add0c6" pod="tigera-operator/tigera-operator-6f6897fdc5-7br5h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:16.266041 kubelet[2762]: I0515 13:08:16.265876 2762 kubelet.go:2306] "Pod admission denied" podUID="aacf4017-964e-4904-927f-3d3d4c513f2f" pod="tigera-operator/tigera-operator-6f6897fdc5-6ck4d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.465535 kubelet[2762]: I0515 13:08:16.465483 2762 kubelet.go:2306] "Pod admission denied" podUID="a6f10561-0d32-496c-8052-c568bdeba7d4" pod="tigera-operator/tigera-operator-6f6897fdc5-zm7mg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.564462 kubelet[2762]: I0515 13:08:16.564220 2762 kubelet.go:2306] "Pod admission denied" podUID="b2370b52-dfcf-4361-9b2b-a44a04025deb" pod="tigera-operator/tigera-operator-6f6897fdc5-229fb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.665210 kubelet[2762]: I0515 13:08:16.665157 2762 kubelet.go:2306] "Pod admission denied" podUID="de2f4f59-8450-4ace-9852-44ed7798838a" pod="tigera-operator/tigera-operator-6f6897fdc5-m5jnj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.863004 kubelet[2762]: I0515 13:08:16.862874 2762 kubelet.go:2306] "Pod admission denied" podUID="ceab4fa4-fde5-46cb-8efd-c0bca4f9e1b5" pod="tigera-operator/tigera-operator-6f6897fdc5-6v8n5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:16.964209 kubelet[2762]: I0515 13:08:16.964150 2762 kubelet.go:2306] "Pod admission denied" podUID="39449045-c549-464c-be7a-c134243c0e08" pod="tigera-operator/tigera-operator-6f6897fdc5-qnw8r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.012155 kubelet[2762]: I0515 13:08:17.012101 2762 kubelet.go:2306] "Pod admission denied" podUID="0be198c1-a290-4c3a-ba68-ba67aee3c249" pod="tigera-operator/tigera-operator-6f6897fdc5-9kxth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:17.119310 kubelet[2762]: I0515 13:08:17.119268 2762 kubelet.go:2306] "Pod admission denied" podUID="770ebf94-d832-48de-a6d1-4893e27d7b47" pod="tigera-operator/tigera-operator-6f6897fdc5-5qxvd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.212283 kubelet[2762]: I0515 13:08:17.212240 2762 kubelet.go:2306] "Pod admission denied" podUID="cd6f1fb9-d1f0-4d26-b1aa-d8ca1ed973bf" pod="tigera-operator/tigera-operator-6f6897fdc5-jds48" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.313813 kubelet[2762]: I0515 13:08:17.313762 2762 kubelet.go:2306] "Pod admission denied" podUID="139fe929-ff62-4b94-b29a-ddbb872038b1" pod="tigera-operator/tigera-operator-6f6897fdc5-g42rn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.514331 kubelet[2762]: I0515 13:08:17.513868 2762 kubelet.go:2306] "Pod admission denied" podUID="f398e27d-388d-4a2f-a719-c5afffcda4b2" pod="tigera-operator/tigera-operator-6f6897fdc5-x79vb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.636158 kubelet[2762]: I0515 13:08:17.636098 2762 kubelet.go:2306] "Pod admission denied" podUID="0e0367f0-bcee-457c-bee4-0af8b0c99a37" pod="tigera-operator/tigera-operator-6f6897fdc5-7mhcc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.715069 kubelet[2762]: I0515 13:08:17.714708 2762 kubelet.go:2306] "Pod admission denied" podUID="e9cca673-9dce-4e84-a421-e6f1338ec794" pod="tigera-operator/tigera-operator-6f6897fdc5-nlwfl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:17.823079 kubelet[2762]: I0515 13:08:17.821679 2762 kubelet.go:2306] "Pod admission denied" podUID="10281c1b-9cac-405f-865b-114412157508" pod="tigera-operator/tigera-operator-6f6897fdc5-lb5jk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:17.912039 kubelet[2762]: I0515 13:08:17.911986 2762 kubelet.go:2306] "Pod admission denied" podUID="0ab33b3b-ccd3-4cbc-893c-a266dca1b453" pod="tigera-operator/tigera-operator-6f6897fdc5-2rkrn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.013482 kubelet[2762]: I0515 13:08:18.013430 2762 kubelet.go:2306] "Pod admission denied" podUID="5b9c6e84-0dcf-480d-8e72-e378c7def47c" pod="tigera-operator/tigera-operator-6f6897fdc5-rs4dz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.119670 kubelet[2762]: I0515 13:08:18.119594 2762 kubelet.go:2306] "Pod admission denied" podUID="306fe715-8406-4995-a2bc-d37c29f2b577" pod="tigera-operator/tigera-operator-6f6897fdc5-dwgk2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.212491 kubelet[2762]: I0515 13:08:18.212439 2762 kubelet.go:2306] "Pod admission denied" podUID="0db6d4a4-472a-4585-a43a-d43e0821b77f" pod="tigera-operator/tigera-operator-6f6897fdc5-4qqhv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.311716 kubelet[2762]: I0515 13:08:18.311669 2762 kubelet.go:2306] "Pod admission denied" podUID="93587d2b-6b52-4cc2-b16e-9323c652ad3b" pod="tigera-operator/tigera-operator-6f6897fdc5-6wstp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.512985 kubelet[2762]: I0515 13:08:18.512526 2762 kubelet.go:2306] "Pod admission denied" podUID="e62e350a-7a43-4bb5-b895-bdc53f42cb36" pod="tigera-operator/tigera-operator-6f6897fdc5-97h7x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.617218 kubelet[2762]: I0515 13:08:18.617104 2762 kubelet.go:2306] "Pod admission denied" podUID="740aa16b-94c8-4e7e-b222-c67badd6b5d7" pod="tigera-operator/tigera-operator-6f6897fdc5-dntk9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:18.714149 kubelet[2762]: I0515 13:08:18.714101 2762 kubelet.go:2306] "Pod admission denied" podUID="8127cd0d-c499-4f36-b6d1-1940e43c201f" pod="tigera-operator/tigera-operator-6f6897fdc5-jshd9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.815685 kubelet[2762]: I0515 13:08:18.814684 2762 kubelet.go:2306] "Pod admission denied" podUID="bb154545-8503-413e-a6c0-b1de3b2a6df1" pod="tigera-operator/tigera-operator-6f6897fdc5-mxbzp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.913483 kubelet[2762]: I0515 13:08:18.913438 2762 kubelet.go:2306] "Pod admission denied" podUID="dd8aa433-c15c-4c7d-8eda-ccef94a6d5aa" pod="tigera-operator/tigera-operator-6f6897fdc5-gdmql" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:18.958010 kubelet[2762]: E0515 13:08:18.957869 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:18.962089 containerd[1543]: time="2025-05-15T13:08:18.961974197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 13:08:19.013397 kubelet[2762]: I0515 13:08:19.013349 2762 kubelet.go:2306] "Pod admission denied" podUID="5e79d537-79a2-41c2-81bb-c2710eaed448" pod="tigera-operator/tigera-operator-6f6897fdc5-2b8hh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.114511 kubelet[2762]: I0515 13:08:19.114202 2762 kubelet.go:2306] "Pod admission denied" podUID="153f8280-11bf-4f7f-aec9-942863b12987" pod="tigera-operator/tigera-operator-6f6897fdc5-77tgd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:19.222584 kubelet[2762]: I0515 13:08:19.222034 2762 kubelet.go:2306] "Pod admission denied" podUID="090885fe-5b4b-4407-9c16-ff41af3dcfeb" pod="tigera-operator/tigera-operator-6f6897fdc5-kfgjw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.318130 kubelet[2762]: I0515 13:08:19.318068 2762 kubelet.go:2306] "Pod admission denied" podUID="12175c1f-5a2d-44aa-8dc8-0d566f779b57" pod="tigera-operator/tigera-operator-6f6897fdc5-j79c9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.516534 kubelet[2762]: I0515 13:08:19.516472 2762 kubelet.go:2306] "Pod admission denied" podUID="d8f5f76b-3825-426b-a705-b0b6f906c4a7" pod="tigera-operator/tigera-operator-6f6897fdc5-gbr4c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.616909 kubelet[2762]: I0515 13:08:19.616811 2762 kubelet.go:2306] "Pod admission denied" podUID="e78125ff-afa1-46e0-983c-b0175cbbe204" pod="tigera-operator/tigera-operator-6f6897fdc5-mltvr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.725421 kubelet[2762]: I0515 13:08:19.725357 2762 kubelet.go:2306] "Pod admission denied" podUID="a907ab9d-054b-4f7b-9879-f2da4d45e74b" pod="tigera-operator/tigera-operator-6f6897fdc5-smdl2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.820452 kubelet[2762]: I0515 13:08:19.820251 2762 kubelet.go:2306] "Pod admission denied" podUID="ef362efb-4e84-4fc2-84d9-50e20a225629" pod="tigera-operator/tigera-operator-6f6897fdc5-wdknc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:19.923971 kubelet[2762]: I0515 13:08:19.923731 2762 kubelet.go:2306] "Pod admission denied" podUID="0784ee53-013b-4ef5-a290-2caf7192cd6a" pod="tigera-operator/tigera-operator-6f6897fdc5-cwbmz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:20.047128 kubelet[2762]: I0515 13:08:20.043401 2762 kubelet.go:2306] "Pod admission denied" podUID="c9a06209-6122-4e8d-b7e3-669c9e48697f" pod="tigera-operator/tigera-operator-6f6897fdc5-r6nb8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.168136 kubelet[2762]: I0515 13:08:20.168088 2762 kubelet.go:2306] "Pod admission denied" podUID="e69c983f-ac17-4e85-8e56-2a384f2310a2" pod="tigera-operator/tigera-operator-6f6897fdc5-dw669" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.298980 kubelet[2762]: I0515 13:08:20.298674 2762 kubelet.go:2306] "Pod admission denied" podUID="27b2c08d-4191-4cff-a503-198a6af312b5" pod="tigera-operator/tigera-operator-6f6897fdc5-8b8km" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.374514 kubelet[2762]: I0515 13:08:20.374366 2762 kubelet.go:2306] "Pod admission denied" podUID="54029e5a-f94d-477e-9acd-03185b89ca4c" pod="tigera-operator/tigera-operator-6f6897fdc5-q576h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.473012 kubelet[2762]: I0515 13:08:20.472875 2762 kubelet.go:2306] "Pod admission denied" podUID="07f4c706-7d02-49f0-b54e-e1b5222d4c54" pod="tigera-operator/tigera-operator-6f6897fdc5-zhg8k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.575270 kubelet[2762]: I0515 13:08:20.575211 2762 kubelet.go:2306] "Pod admission denied" podUID="997a9254-e9a5-4ab6-9d61-8ff5a6dae9f3" pod="tigera-operator/tigera-operator-6f6897fdc5-h2w4j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.677161 kubelet[2762]: I0515 13:08:20.676248 2762 kubelet.go:2306] "Pod admission denied" podUID="f6c13039-cbdf-43f6-ab1d-0f3af08e3082" pod="tigera-operator/tigera-operator-6f6897fdc5-9fdp7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:20.769961 kubelet[2762]: I0515 13:08:20.769814 2762 kubelet.go:2306] "Pod admission denied" podUID="a4a40c55-cb0a-449b-9834-ea8eabbcff01" pod="tigera-operator/tigera-operator-6f6897fdc5-h2m24" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.875936 kubelet[2762]: I0515 13:08:20.875885 2762 kubelet.go:2306] "Pod admission denied" podUID="86e76afe-5996-44e0-9c20-6d59b10942b5" pod="tigera-operator/tigera-operator-6f6897fdc5-5cs9g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:20.973413 kubelet[2762]: I0515 13:08:20.972438 2762 kubelet.go:2306] "Pod admission denied" podUID="d4b86045-8515-4c34-bc66-1e00240b0fe0" pod="tigera-operator/tigera-operator-6f6897fdc5-jmmlf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.020482 kubelet[2762]: I0515 13:08:21.020364 2762 kubelet.go:2306] "Pod admission denied" podUID="b11e882f-70cc-433c-abc6-b61cb64824b1" pod="tigera-operator/tigera-operator-6f6897fdc5-97gtm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.129147 kubelet[2762]: I0515 13:08:21.128192 2762 kubelet.go:2306] "Pod admission denied" podUID="af892ca4-bc48-413a-ac22-129e51baaf36" pod="tigera-operator/tigera-operator-6f6897fdc5-ltfdg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.230836 kubelet[2762]: I0515 13:08:21.230752 2762 kubelet.go:2306] "Pod admission denied" podUID="51c4e930-1bbe-4f74-8ff3-6e6b31494822" pod="tigera-operator/tigera-operator-6f6897fdc5-5249f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.271918 kubelet[2762]: I0515 13:08:21.270695 2762 kubelet.go:2306] "Pod admission denied" podUID="7efd023a-45ba-4982-bd1e-f77e1af99ce9" pod="tigera-operator/tigera-operator-6f6897fdc5-km7nc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:21.372180 kubelet[2762]: I0515 13:08:21.372121 2762 kubelet.go:2306] "Pod admission denied" podUID="2aaa9282-c04f-465a-875e-d43284df71be" pod="tigera-operator/tigera-operator-6f6897fdc5-pwbkx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.479326 kubelet[2762]: I0515 13:08:21.479266 2762 kubelet.go:2306] "Pod admission denied" podUID="5d2be3f9-41ef-42e1-98e8-a86bae633c6a" pod="tigera-operator/tigera-operator-6f6897fdc5-mj9sh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.576808 kubelet[2762]: I0515 13:08:21.576497 2762 kubelet.go:2306] "Pod admission denied" podUID="2f517b3d-5f87-453a-960c-bb9d963f4175" pod="tigera-operator/tigera-operator-6f6897fdc5-4mm4t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.675038 kubelet[2762]: I0515 13:08:21.674366 2762 kubelet.go:2306] "Pod admission denied" podUID="c6d34504-0992-4257-95e1-cc9fc92acb62" pod="tigera-operator/tigera-operator-6f6897fdc5-xvt5j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.718611 kubelet[2762]: I0515 13:08:21.718571 2762 kubelet.go:2306] "Pod admission denied" podUID="cecaf48a-f424-4ec2-b785-a7dd67f19f9f" pod="tigera-operator/tigera-operator-6f6897fdc5-7d8tw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.827413 kubelet[2762]: I0515 13:08:21.827274 2762 kubelet.go:2306] "Pod admission denied" podUID="b1cbbf14-7fc6-431f-9e95-8244d178913d" pod="tigera-operator/tigera-operator-6f6897fdc5-gwr79" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:21.920597 kubelet[2762]: I0515 13:08:21.920467 2762 kubelet.go:2306] "Pod admission denied" podUID="e04edf88-fd94-4d92-b1e2-3123fed2edf5" pod="tigera-operator/tigera-operator-6f6897fdc5-wkjw6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:22.020244 kubelet[2762]: I0515 13:08:22.020177 2762 kubelet.go:2306] "Pod admission denied" podUID="ca226762-c8fb-4bc4-92e5-45557fb4161c" pod="tigera-operator/tigera-operator-6f6897fdc5-f9bzs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.120987 kubelet[2762]: I0515 13:08:22.120940 2762 kubelet.go:2306] "Pod admission denied" podUID="9247278f-bcef-4176-badc-4005bb7c0388" pod="tigera-operator/tigera-operator-6f6897fdc5-hmq2h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.227289 kubelet[2762]: I0515 13:08:22.225902 2762 kubelet.go:2306] "Pod admission denied" podUID="1d96382a-616f-444b-a552-98aa9261a308" pod="tigera-operator/tigera-operator-6f6897fdc5-b58hj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.420705 kubelet[2762]: I0515 13:08:22.420270 2762 kubelet.go:2306] "Pod admission denied" podUID="df4b4c2c-980a-491b-9019-67adb736bc22" pod="tigera-operator/tigera-operator-6f6897fdc5-p4pdt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.513121 containerd[1543]: time="2025-05-15T13:08:22.512939180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount693010128: write /var/lib/containerd/tmpmounts/containerd-mount693010128/usr/lib/calico/bpf/from_nat_info_co-re.o: no space left on device" May 15 13:08:22.514162 containerd[1543]: time="2025-05-15T13:08:22.513056800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 13:08:22.515423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693010128.mount: Deactivated successfully. 
May 15 13:08:22.516123 kubelet[2762]: E0515 13:08:22.515738 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount693010128: write /var/lib/containerd/tmpmounts/containerd-mount693010128/usr/lib/calico/bpf/from_nat_info_co-re.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 13:08:22.516123 kubelet[2762]: E0515 13:08:22.515827 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount693010128: write /var/lib/containerd/tmpmounts/containerd-mount693010128/usr/lib/calico/bpf/from_nat_info_co-re.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 13:08:22.516254 kubelet[2762]: E0515 13:08:22.516175 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:Resour
ceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg5bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-h5k9z_calico-system(1a8a24dd-708e-4ec3-b972-4df98026b344): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount693010128: write /var/lib/containerd/tmpmounts/containerd-mount693010128/usr/lib/calico/bpf/from_nat_info_co-re.o: no space left on device" logger="UnhandledError" May 15 13:08:22.517608 kubelet[2762]: E0515 13:08:22.517524 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount693010128: write /var/lib/containerd/tmpmounts/containerd-mount693010128/usr/lib/calico/bpf/from_nat_info_co-re.o: no space left on device\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344" May 15 13:08:22.536446 kubelet[2762]: I0515 13:08:22.536248 2762 kubelet.go:2306] "Pod admission denied" podUID="72dc73bd-6719-44a9-b7cb-ecce18d254b8" pod="tigera-operator/tigera-operator-6f6897fdc5-hk8zv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.620898 kubelet[2762]: I0515 13:08:22.620631 2762 kubelet.go:2306] "Pod admission denied" podUID="0e94d08e-0922-46ec-bca6-86c25d238c9a" pod="tigera-operator/tigera-operator-6f6897fdc5-rpj87" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.716099 kubelet[2762]: I0515 13:08:22.715963 2762 kubelet.go:2306] "Pod admission denied" podUID="04438ea3-59bb-4c0f-84f9-e240ea9d1f5e" pod="tigera-operator/tigera-operator-6f6897fdc5-rpsbk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:22.814758 kubelet[2762]: I0515 13:08:22.814700 2762 kubelet.go:2306] "Pod admission denied" podUID="1d96c7df-1079-4f0f-8189-fce0a1026d21" pod="tigera-operator/tigera-operator-6f6897fdc5-fbkt9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:22.956838 kubelet[2762]: E0515 13:08:22.956646 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:22.957365 containerd[1543]: time="2025-05-15T13:08:22.957186958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:08:22.957681 containerd[1543]: time="2025-05-15T13:08:22.957654899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:08:23.023865 kubelet[2762]: I0515 13:08:23.023635 2762 kubelet.go:2306] "Pod admission denied" podUID="a0e67d2c-eef3-444a-9522-6d8dc3c80a76" pod="tigera-operator/tigera-operator-6f6897fdc5-q6ms7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.046985 containerd[1543]: time="2025-05-15T13:08:23.046919861Z" level=error msg="Failed to destroy network for sandbox \"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.051361 systemd[1]: run-netns-cni\x2d287f3b94\x2d0910\x2dec38\x2dd1b0\x2de72d1ce1d22a.mount: Deactivated successfully. 
May 15 13:08:23.054786 containerd[1543]: time="2025-05-15T13:08:23.054698352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.055322 kubelet[2762]: E0515 13:08:23.055259 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.056091 kubelet[2762]: E0515 13:08:23.055478 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:23.056091 kubelet[2762]: E0515 13:08:23.055729 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:23.056091 kubelet[2762]: E0515 13:08:23.055842 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3caff266b9f468d7d5370097a0d3c8a260f714b4fcd9906d27fd20d69b1c051\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:08:23.072349 containerd[1543]: time="2025-05-15T13:08:23.072276502Z" level=error msg="Failed to destroy network for sandbox \"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.075703 containerd[1543]: time="2025-05-15T13:08:23.074516668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.076152 systemd[1]: run-netns-cni\x2d3ff9dcd8\x2d88f5\x2d6f1e\x2dac6e\x2d8c7cf78311f3.mount: Deactivated 
successfully. May 15 13:08:23.076258 kubelet[2762]: E0515 13:08:23.076197 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:23.076334 kubelet[2762]: E0515 13:08:23.076287 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:23.076380 kubelet[2762]: E0515 13:08:23.076336 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:23.076846 kubelet[2762]: E0515 13:08:23.076412 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d91a78e5d3a69440a6e4ec65e5054e990efa755dd954d70e7d95dca84166802f\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:08:23.114014 kubelet[2762]: I0515 13:08:23.113793 2762 kubelet.go:2306] "Pod admission denied" podUID="868b06f3-d720-463f-806e-d7bdea72fdd2" pod="tigera-operator/tigera-operator-6f6897fdc5-fv8b8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.215519 kubelet[2762]: I0515 13:08:23.215454 2762 kubelet.go:2306] "Pod admission denied" podUID="ac1a4aac-6d1d-43c5-9e19-3c419a63a3bd" pod="tigera-operator/tigera-operator-6f6897fdc5-8pnzn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.436526 kubelet[2762]: I0515 13:08:23.436317 2762 kubelet.go:2306] "Pod admission denied" podUID="963cb2db-9bbc-4d65-8ebb-058946b27aed" pod="tigera-operator/tigera-operator-6f6897fdc5-kbgjm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.514385 kubelet[2762]: I0515 13:08:23.514321 2762 kubelet.go:2306] "Pod admission denied" podUID="c4cadd43-c3b5-45ee-a53c-5d4304f37930" pod="tigera-operator/tigera-operator-6f6897fdc5-jq9f2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.621177 kubelet[2762]: I0515 13:08:23.621121 2762 kubelet.go:2306] "Pod admission denied" podUID="ed9db09f-0681-4a8b-b22b-12cfe759c7f7" pod="tigera-operator/tigera-operator-6f6897fdc5-kf52c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.818340 kubelet[2762]: I0515 13:08:23.817765 2762 kubelet.go:2306] "Pod admission denied" podUID="087dbddc-1ea1-4e06-930d-ced1008ab212" pod="tigera-operator/tigera-operator-6f6897fdc5-kcs2s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:23.916737 kubelet[2762]: I0515 13:08:23.916671 2762 kubelet.go:2306] "Pod admission denied" podUID="e30a2ceb-256b-4c6c-b1a4-26848600ef81" pod="tigera-operator/tigera-operator-6f6897fdc5-5q45p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:23.985610 kubelet[2762]: I0515 13:08:23.985540 2762 kubelet.go:2306] "Pod admission denied" podUID="33a43da7-a176-46d9-bbeb-86c4fba77982" pod="tigera-operator/tigera-operator-6f6897fdc5-csl6p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.066468 kubelet[2762]: I0515 13:08:24.066418 2762 kubelet.go:2306] "Pod admission denied" podUID="60d31d56-9100-4968-b5d4-a81f94205240" pod="tigera-operator/tigera-operator-6f6897fdc5-jwtcn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.179001 kubelet[2762]: I0515 13:08:24.178944 2762 kubelet.go:2306] "Pod admission denied" podUID="328707ea-d2b1-414f-8dcd-2347fbc4fe90" pod="tigera-operator/tigera-operator-6f6897fdc5-zfkbk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.259604 kubelet[2762]: I0515 13:08:24.259532 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:24.260303 kubelet[2762]: I0515 13:08:24.259827 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:24.264782 kubelet[2762]: I0515 13:08:24.264764 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:24.282379 kubelet[2762]: I0515 13:08:24.282228 2762 kubelet.go:2306] "Pod admission denied" podUID="ccdcf2b9-860c-4388-85a3-369d59c9b800" pod="tigera-operator/tigera-operator-6f6897fdc5-jg2sg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:24.290986 kubelet[2762]: I0515 13:08:24.290923 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:08:24.291419 kubelet[2762]: I0515 13:08:24.291390 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:08:24.291591 kubelet[2762]: E0515 13:08:24.291576 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:24.291665 kubelet[2762]: E0515 13:08:24.291655 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291720 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291732 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291739 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291760 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291771 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291781 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291795 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:08:24.291831 kubelet[2762]: E0515 13:08:24.291804 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:08:24.291831 kubelet[2762]: I0515 13:08:24.291814 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:08:24.370677 kubelet[2762]: I0515 13:08:24.370404 2762 kubelet.go:2306] "Pod admission denied" podUID="53a3deda-67dc-460a-87c0-bee05c81c9de" pod="tigera-operator/tigera-operator-6f6897fdc5-hqtgd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.465547 kubelet[2762]: I0515 13:08:24.464773 2762 kubelet.go:2306] "Pod admission denied" podUID="400737a3-eeb2-491c-a978-ec295f84c146" pod="tigera-operator/tigera-operator-6f6897fdc5-7xktl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.667991 kubelet[2762]: I0515 13:08:24.667936 2762 kubelet.go:2306] "Pod admission denied" podUID="84f0f624-51aa-4b69-b929-3224c5a63b0b" pod="tigera-operator/tigera-operator-6f6897fdc5-rdzrm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.762758 kubelet[2762]: I0515 13:08:24.762629 2762 kubelet.go:2306] "Pod admission denied" podUID="3b49f8bd-9539-433a-a696-df2731f0c903" pod="tigera-operator/tigera-operator-6f6897fdc5-r74lp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:24.868512 kubelet[2762]: I0515 13:08:24.868389 2762 kubelet.go:2306] "Pod admission denied" podUID="c04e8895-32f8-4a87-ba09-848df560bcda" pod="tigera-operator/tigera-operator-6f6897fdc5-6jz87" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:24.965401 kubelet[2762]: I0515 13:08:24.965331 2762 kubelet.go:2306] "Pod admission denied" podUID="ae521e7b-0bbd-48b7-9c55-243d2e61a969" pod="tigera-operator/tigera-operator-6f6897fdc5-v6dz7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.066456 kubelet[2762]: I0515 13:08:25.066245 2762 kubelet.go:2306] "Pod admission denied" podUID="54ef1e89-8ebe-46fc-b929-98d0758b3d5e" pod="tigera-operator/tigera-operator-6f6897fdc5-kn9nr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.264703 kubelet[2762]: I0515 13:08:25.264652 2762 kubelet.go:2306] "Pod admission denied" podUID="6829c383-ff93-4ffc-9893-8c438fd16f45" pod="tigera-operator/tigera-operator-6f6897fdc5-m66h5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.366257 kubelet[2762]: I0515 13:08:25.365620 2762 kubelet.go:2306] "Pod admission denied" podUID="e0300834-066f-4f29-889c-c752d237df5a" pod="tigera-operator/tigera-operator-6f6897fdc5-vl2mb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.420362 kubelet[2762]: I0515 13:08:25.420312 2762 kubelet.go:2306] "Pod admission denied" podUID="761854b2-97f0-4b8b-8d5d-8ae5e5b6daba" pod="tigera-operator/tigera-operator-6f6897fdc5-56284" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.517487 kubelet[2762]: I0515 13:08:25.517436 2762 kubelet.go:2306] "Pod admission denied" podUID="50e1c01f-3f7b-4945-9e48-703517e34619" pod="tigera-operator/tigera-operator-6f6897fdc5-l8dzw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:25.613954 kubelet[2762]: I0515 13:08:25.613892 2762 kubelet.go:2306] "Pod admission denied" podUID="85a581ef-4fe2-48f7-9ae1-350d2acb2352" pod="tigera-operator/tigera-operator-6f6897fdc5-cvsx2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.716417 kubelet[2762]: I0515 13:08:25.716351 2762 kubelet.go:2306] "Pod admission denied" podUID="1f0b2402-f9e7-41f9-8e75-3c459ccc6765" pod="tigera-operator/tigera-operator-6f6897fdc5-xhzlw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.814830 kubelet[2762]: I0515 13:08:25.814771 2762 kubelet.go:2306] "Pod admission denied" podUID="02c2ec3b-90f9-46df-b557-be7f973c916a" pod="tigera-operator/tigera-operator-6f6897fdc5-9tq69" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.917333 kubelet[2762]: I0515 13:08:25.917275 2762 kubelet.go:2306] "Pod admission denied" podUID="1ebf6cab-3a60-429a-bb5f-3daf1e0fc6a6" pod="tigera-operator/tigera-operator-6f6897fdc5-djbkl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:25.959590 containerd[1543]: time="2025-05-15T13:08:25.959297627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:08:26.027484 kubelet[2762]: I0515 13:08:26.027350 2762 kubelet.go:2306] "Pod admission denied" podUID="4b71616f-1f72-42ba-b60b-9c46c2c13dca" pod="tigera-operator/tigera-operator-6f6897fdc5-425jl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:26.060779 containerd[1543]: time="2025-05-15T13:08:26.060711379Z" level=error msg="Failed to destroy network for sandbox \"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:26.063832 containerd[1543]: time="2025-05-15T13:08:26.063764477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:26.064309 kubelet[2762]: E0515 13:08:26.064214 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:26.064469 kubelet[2762]: E0515 13:08:26.064264 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:26.064469 kubelet[2762]: E0515 13:08:26.064429 2762 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:26.065315 kubelet[2762]: E0515 13:08:26.064594 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"256b34f89de192d6df70b63ca3d691d3b854a51cf13463a6262fa9d607662a42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:08:26.066235 systemd[1]: run-netns-cni\x2dc5cfa5b6\x2d0899\x2d7646\x2de972\x2db281fe25efb6.mount: Deactivated successfully. May 15 13:08:26.115403 kubelet[2762]: I0515 13:08:26.115350 2762 kubelet.go:2306] "Pod admission denied" podUID="6fec814b-75fe-4792-9e50-1b3623f1714a" pod="tigera-operator/tigera-operator-6f6897fdc5-7nxpf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.218336 kubelet[2762]: I0515 13:08:26.218272 2762 kubelet.go:2306] "Pod admission denied" podUID="d6597543-26a8-4cf9-b908-49249261b310" pod="tigera-operator/tigera-operator-6f6897fdc5-7dvsh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:26.321007 kubelet[2762]: I0515 13:08:26.320842 2762 kubelet.go:2306] "Pod admission denied" podUID="69002f1f-54b7-4328-b6ca-cb1e832226f6" pod="tigera-operator/tigera-operator-6f6897fdc5-xcjvh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.414734 kubelet[2762]: I0515 13:08:26.414673 2762 kubelet.go:2306] "Pod admission denied" podUID="9d8f4432-83f0-4294-a314-ffcfb3dd4e8b" pod="tigera-operator/tigera-operator-6f6897fdc5-4kbbm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.519618 kubelet[2762]: I0515 13:08:26.519546 2762 kubelet.go:2306] "Pod admission denied" podUID="74638e2f-4309-4bdc-a457-989deee22868" pod="tigera-operator/tigera-operator-6f6897fdc5-wb2bg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.614894 kubelet[2762]: I0515 13:08:26.614742 2762 kubelet.go:2306] "Pod admission denied" podUID="ef4fee3e-3520-40af-a9ea-8ecb45c77fd2" pod="tigera-operator/tigera-operator-6f6897fdc5-fzl2g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.719597 kubelet[2762]: I0515 13:08:26.719506 2762 kubelet.go:2306] "Pod admission denied" podUID="f3eafae7-a7b3-4a1c-83f5-3059698a4b59" pod="tigera-operator/tigera-operator-6f6897fdc5-9nh6z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.815343 kubelet[2762]: I0515 13:08:26.815281 2762 kubelet.go:2306] "Pod admission denied" podUID="724de651-0dec-424d-9e3f-1aba7635aeaa" pod="tigera-operator/tigera-operator-6f6897fdc5-67l8c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:26.914249 kubelet[2762]: I0515 13:08:26.914194 2762 kubelet.go:2306] "Pod admission denied" podUID="073cf42f-696d-4cd3-a6c2-555de18c3edc" pod="tigera-operator/tigera-operator-6f6897fdc5-j8gg4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:27.016264 kubelet[2762]: I0515 13:08:27.016165 2762 kubelet.go:2306] "Pod admission denied" podUID="5546e618-a9de-4d28-8a7a-4e7477b58f81" pod="tigera-operator/tigera-operator-6f6897fdc5-cdvk2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.116174 kubelet[2762]: I0515 13:08:27.116123 2762 kubelet.go:2306] "Pod admission denied" podUID="59c6a050-ffb6-4948-8e96-d3de017f7e75" pod="tigera-operator/tigera-operator-6f6897fdc5-frhtb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.214116 kubelet[2762]: I0515 13:08:27.213732 2762 kubelet.go:2306] "Pod admission denied" podUID="086a0d76-9135-4d4a-87f8-5f4d1a114dc3" pod="tigera-operator/tigera-operator-6f6897fdc5-mgh9b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.316416 kubelet[2762]: I0515 13:08:27.316359 2762 kubelet.go:2306] "Pod admission denied" podUID="fc646d56-4965-4efe-adcf-77ae578f586a" pod="tigera-operator/tigera-operator-6f6897fdc5-27rm9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.416929 kubelet[2762]: I0515 13:08:27.416861 2762 kubelet.go:2306] "Pod admission denied" podUID="f403e9c5-799e-42c7-ae8a-d166ec793df1" pod="tigera-operator/tigera-operator-6f6897fdc5-47w5z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.471702 kubelet[2762]: I0515 13:08:27.471272 2762 kubelet.go:2306] "Pod admission denied" podUID="71f09270-4fb2-48b8-8b0e-fe0c386b4174" pod="tigera-operator/tigera-operator-6f6897fdc5-zdwv9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.565006 kubelet[2762]: I0515 13:08:27.564943 2762 kubelet.go:2306] "Pod admission denied" podUID="972b4d75-7cf5-4db6-9ce4-185423dc68b4" pod="tigera-operator/tigera-operator-6f6897fdc5-hj98z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:27.667732 kubelet[2762]: I0515 13:08:27.667520 2762 kubelet.go:2306] "Pod admission denied" podUID="e48926ba-c55c-4524-93af-3f32f509088e" pod="tigera-operator/tigera-operator-6f6897fdc5-5vh6m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.718628 kubelet[2762]: I0515 13:08:27.717118 2762 kubelet.go:2306] "Pod admission denied" podUID="6c9c0a01-80d8-4be7-ac30-2a16efc8f944" pod="tigera-operator/tigera-operator-6f6897fdc5-pzcp5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.818854 kubelet[2762]: I0515 13:08:27.817725 2762 kubelet.go:2306] "Pod admission denied" podUID="319ee2b6-826f-4649-8d7e-34f34e017dd6" pod="tigera-operator/tigera-operator-6f6897fdc5-fcdmd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.918521 kubelet[2762]: I0515 13:08:27.918473 2762 kubelet.go:2306] "Pod admission denied" podUID="3aba9bfa-727f-4920-ae07-edc5c845db1e" pod="tigera-operator/tigera-operator-6f6897fdc5-wczmw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:27.965577 kubelet[2762]: I0515 13:08:27.965512 2762 kubelet.go:2306] "Pod admission denied" podUID="2f4ed3ba-2d7b-4bb5-b8c0-36cc7a0d8c55" pod="tigera-operator/tigera-operator-6f6897fdc5-zrksq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.066815 kubelet[2762]: I0515 13:08:28.066774 2762 kubelet.go:2306] "Pod admission denied" podUID="cc6701d4-6d0a-4cdc-b2a3-eed5f0f485d1" pod="tigera-operator/tigera-operator-6f6897fdc5-dfzng" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.165790 kubelet[2762]: I0515 13:08:28.165729 2762 kubelet.go:2306] "Pod admission denied" podUID="deb995c4-2b0d-4c9b-bf2d-8f36541a977d" pod="tigera-operator/tigera-operator-6f6897fdc5-g2sgj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:28.266835 kubelet[2762]: I0515 13:08:28.266773 2762 kubelet.go:2306] "Pod admission denied" podUID="18b92080-5120-4caa-963c-14d03b6a6153" pod="tigera-operator/tigera-operator-6f6897fdc5-95x9w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.364720 kubelet[2762]: I0515 13:08:28.364640 2762 kubelet.go:2306] "Pod admission denied" podUID="0db95170-a4e9-4f0f-9c00-24f1aa000a26" pod="tigera-operator/tigera-operator-6f6897fdc5-94ttg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.466324 kubelet[2762]: I0515 13:08:28.466159 2762 kubelet.go:2306] "Pod admission denied" podUID="6348e6fb-bef7-4d48-b2b0-3258d760be4b" pod="tigera-operator/tigera-operator-6f6897fdc5-kc54x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.566417 kubelet[2762]: I0515 13:08:28.566356 2762 kubelet.go:2306] "Pod admission denied" podUID="d0189f30-e60a-452f-ac28-2ba95ff11a21" pod="tigera-operator/tigera-operator-6f6897fdc5-8dbps" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.663545 kubelet[2762]: I0515 13:08:28.663478 2762 kubelet.go:2306] "Pod admission denied" podUID="422ef384-fc41-4309-bdbc-31641c0efe51" pod="tigera-operator/tigera-operator-6f6897fdc5-hnb2h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:28.866876 kubelet[2762]: I0515 13:08:28.866369 2762 kubelet.go:2306] "Pod admission denied" podUID="f795fba0-eb03-4753-8b73-bdafe6c60ce9" pod="tigera-operator/tigera-operator-6f6897fdc5-mbklx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:28.957365 kubelet[2762]: E0515 13:08:28.957249 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:28.958533 containerd[1543]: time="2025-05-15T13:08:28.958347165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:08:28.977502 kubelet[2762]: I0515 13:08:28.977433 2762 kubelet.go:2306] "Pod admission denied" podUID="186e81c4-f810-4a29-b993-fff212d66783" pod="tigera-operator/tigera-operator-6f6897fdc5-llz58" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.030366 kubelet[2762]: I0515 13:08:29.030306 2762 kubelet.go:2306] "Pod admission denied" podUID="8fedb744-c4ef-44b1-8130-19f7ffdc13f4" pod="tigera-operator/tigera-operator-6f6897fdc5-njkgw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:29.048737 containerd[1543]: time="2025-05-15T13:08:29.048592737Z" level=error msg="Failed to destroy network for sandbox \"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:29.053146 containerd[1543]: time="2025-05-15T13:08:29.051700314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:29.053393 kubelet[2762]: E0515 13:08:29.052801 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:29.053393 kubelet[2762]: E0515 13:08:29.052862 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:29.053393 kubelet[2762]: E0515 13:08:29.052886 2762 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:29.053393 kubelet[2762]: E0515 13:08:29.052943 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a4031f9c40335221a008174f980af606c0b1352747eff3aa716cab35df353e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:08:29.055713 systemd[1]: run-netns-cni\x2d52e37d67\x2d6bab\x2dc7d3\x2d99c2\x2d40af7e780bf3.mount: Deactivated successfully. May 15 13:08:29.113892 kubelet[2762]: I0515 13:08:29.113839 2762 kubelet.go:2306] "Pod admission denied" podUID="8721d37b-8d1e-4746-8793-c76691cb2c95" pod="tigera-operator/tigera-operator-6f6897fdc5-lr9r4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.218055 kubelet[2762]: I0515 13:08:29.217880 2762 kubelet.go:2306] "Pod admission denied" podUID="3c88ee09-6fc8-4cfc-a20b-b509b6c9830f" pod="tigera-operator/tigera-operator-6f6897fdc5-wbpvn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:29.318402 kubelet[2762]: I0515 13:08:29.318346 2762 kubelet.go:2306] "Pod admission denied" podUID="b520a212-5091-4075-89f0-3469f21eb981" pod="tigera-operator/tigera-operator-6f6897fdc5-sczs5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.418356 kubelet[2762]: I0515 13:08:29.418307 2762 kubelet.go:2306] "Pod admission denied" podUID="7c2343a1-09cd-4ec1-9e1a-acaf40796e3d" pod="tigera-operator/tigera-operator-6f6897fdc5-xjgkw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.515209 kubelet[2762]: I0515 13:08:29.515077 2762 kubelet.go:2306] "Pod admission denied" podUID="171bedff-d379-4706-a4a6-dd7c7afb5773" pod="tigera-operator/tigera-operator-6f6897fdc5-dhtbv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.614646 kubelet[2762]: I0515 13:08:29.614580 2762 kubelet.go:2306] "Pod admission denied" podUID="dc0a2c2b-61b1-402c-84e0-672f26f0ee24" pod="tigera-operator/tigera-operator-6f6897fdc5-pvdwc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.721099 kubelet[2762]: I0515 13:08:29.720855 2762 kubelet.go:2306] "Pod admission denied" podUID="0bae33e9-1886-4103-85c4-9fc3d00bab3e" pod="tigera-operator/tigera-operator-6f6897fdc5-224bd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:29.917732 kubelet[2762]: I0515 13:08:29.917652 2762 kubelet.go:2306] "Pod admission denied" podUID="54958ad6-e124-449f-9a56-b698fc330f74" pod="tigera-operator/tigera-operator-6f6897fdc5-8cmc5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.021175 kubelet[2762]: I0515 13:08:30.021130 2762 kubelet.go:2306] "Pod admission denied" podUID="fde3e20c-a62d-4337-b097-4b2c1dcf2dc3" pod="tigera-operator/tigera-operator-6f6897fdc5-cq2cq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:30.116975 kubelet[2762]: I0515 13:08:30.116911 2762 kubelet.go:2306] "Pod admission denied" podUID="1e8a50a6-89bc-4764-9454-9ea00eaa547d" pod="tigera-operator/tigera-operator-6f6897fdc5-vxhsq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.315113 kubelet[2762]: I0515 13:08:30.314964 2762 kubelet.go:2306] "Pod admission denied" podUID="b5dd727e-a35f-4ef5-a7ec-6b104fbd1ae9" pod="tigera-operator/tigera-operator-6f6897fdc5-dhb2t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.432594 kubelet[2762]: I0515 13:08:30.432536 2762 kubelet.go:2306] "Pod admission denied" podUID="e32d79df-0da8-4894-8bee-696074504cb6" pod="tigera-operator/tigera-operator-6f6897fdc5-b85z5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.515431 kubelet[2762]: I0515 13:08:30.515377 2762 kubelet.go:2306] "Pod admission denied" podUID="4e6a84cf-b643-4a8a-adad-b69e71febcd7" pod="tigera-operator/tigera-operator-6f6897fdc5-pq2jk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.618256 kubelet[2762]: I0515 13:08:30.618208 2762 kubelet.go:2306] "Pod admission denied" podUID="caaf9c69-078f-434d-9a71-e75e9be3ea1b" pod="tigera-operator/tigera-operator-6f6897fdc5-x9vx7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.665937 kubelet[2762]: I0515 13:08:30.665874 2762 kubelet.go:2306] "Pod admission denied" podUID="729ca470-ac76-4c7d-b2d1-efd4a71d7817" pod="tigera-operator/tigera-operator-6f6897fdc5-lwdjc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:30.764270 kubelet[2762]: I0515 13:08:30.764213 2762 kubelet.go:2306] "Pod admission denied" podUID="ccca2d70-f2da-40da-892d-b0cd98568515" pod="tigera-operator/tigera-operator-6f6897fdc5-n72j5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:30.965350 kubelet[2762]: I0515 13:08:30.965195 2762 kubelet.go:2306] "Pod admission denied" podUID="835df6ed-5cc6-45d9-9fc1-dfc1cb347b0d" pod="tigera-operator/tigera-operator-6f6897fdc5-tmwbs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.066020 kubelet[2762]: I0515 13:08:31.065971 2762 kubelet.go:2306] "Pod admission denied" podUID="b2ca29d1-c3c6-44b3-8f92-1555e2ccef33" pod="tigera-operator/tigera-operator-6f6897fdc5-t2bm4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.115280 kubelet[2762]: I0515 13:08:31.115229 2762 kubelet.go:2306] "Pod admission denied" podUID="242219a4-f35b-445e-a62c-4a6a4d959abc" pod="tigera-operator/tigera-operator-6f6897fdc5-6f5mw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.216918 kubelet[2762]: I0515 13:08:31.216766 2762 kubelet.go:2306] "Pod admission denied" podUID="ed89c0ed-3c5e-45cc-9331-752f103344f8" pod="tigera-operator/tigera-operator-6f6897fdc5-xb4ph" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.335259 kubelet[2762]: I0515 13:08:31.334318 2762 kubelet.go:2306] "Pod admission denied" podUID="6b6a6f8e-123b-4b68-9095-4a130a27de97" pod="tigera-operator/tigera-operator-6f6897fdc5-j5898" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.419832 kubelet[2762]: I0515 13:08:31.419407 2762 kubelet.go:2306] "Pod admission denied" podUID="0c583839-473d-45d7-948a-05049f9b6f75" pod="tigera-operator/tigera-operator-6f6897fdc5-r8clc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.516624 kubelet[2762]: I0515 13:08:31.516467 2762 kubelet.go:2306] "Pod admission denied" podUID="97f33b58-fc89-485d-97d6-538f3e357678" pod="tigera-operator/tigera-operator-6f6897fdc5-w9dp8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:31.616331 kubelet[2762]: I0515 13:08:31.616263 2762 kubelet.go:2306] "Pod admission denied" podUID="2dec3d21-df07-4502-8374-ebd52c50611e" pod="tigera-operator/tigera-operator-6f6897fdc5-xl7z6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.816363 kubelet[2762]: I0515 13:08:31.816229 2762 kubelet.go:2306] "Pod admission denied" podUID="b637ea55-2331-4953-8747-1d01db039971" pod="tigera-operator/tigera-operator-6f6897fdc5-8pntx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:31.915708 kubelet[2762]: I0515 13:08:31.915648 2762 kubelet.go:2306] "Pod admission denied" podUID="9ef7b573-3f96-4464-8e3a-8a4b480d1816" pod="tigera-operator/tigera-operator-6f6897fdc5-6z5sp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.017971 kubelet[2762]: I0515 13:08:32.017912 2762 kubelet.go:2306] "Pod admission denied" podUID="0742fa35-4d1e-42b6-947b-df7247d9fa50" pod="tigera-operator/tigera-operator-6f6897fdc5-gtrvc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.117333 kubelet[2762]: I0515 13:08:32.117289 2762 kubelet.go:2306] "Pod admission denied" podUID="acb4792b-55f4-47ef-b247-f8517e096915" pod="tigera-operator/tigera-operator-6f6897fdc5-c8j5q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.217117 kubelet[2762]: I0515 13:08:32.217064 2762 kubelet.go:2306] "Pod admission denied" podUID="e6f6a913-0022-4be8-b00b-df277e0b3cb1" pod="tigera-operator/tigera-operator-6f6897fdc5-7ghl8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.314372 kubelet[2762]: I0515 13:08:32.314330 2762 kubelet.go:2306] "Pod admission denied" podUID="63ad24d6-1da6-42a5-8331-aaadc72fc17a" pod="tigera-operator/tigera-operator-6f6897fdc5-wzgtd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:32.417587 kubelet[2762]: I0515 13:08:32.417242 2762 kubelet.go:2306] "Pod admission denied" podUID="e56691db-521f-4eea-a410-f4928ef05979" pod="tigera-operator/tigera-operator-6f6897fdc5-t75pg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.519528 kubelet[2762]: I0515 13:08:32.519476 2762 kubelet.go:2306] "Pod admission denied" podUID="cf3c1837-a93d-4373-871e-0c0bba732f45" pod="tigera-operator/tigera-operator-6f6897fdc5-bf9cc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.615690 kubelet[2762]: I0515 13:08:32.615646 2762 kubelet.go:2306] "Pod admission denied" podUID="3c284a42-b872-4085-a97c-e70b94d57702" pod="tigera-operator/tigera-operator-6f6897fdc5-8l986" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.717109 kubelet[2762]: I0515 13:08:32.716791 2762 kubelet.go:2306] "Pod admission denied" podUID="482a1973-b64f-4a0d-b9b0-27f47bc6c2e0" pod="tigera-operator/tigera-operator-6f6897fdc5-tfxpw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.815475 kubelet[2762]: I0515 13:08:32.815424 2762 kubelet.go:2306] "Pod admission denied" podUID="92f23012-3aa2-4934-bc89-7cd3299741a2" pod="tigera-operator/tigera-operator-6f6897fdc5-bq4sm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:32.917062 kubelet[2762]: I0515 13:08:32.917012 2762 kubelet.go:2306] "Pod admission denied" podUID="b05b9723-6a5a-4ac8-b312-b35e8146db58" pod="tigera-operator/tigera-operator-6f6897fdc5-52fwz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.017693 kubelet[2762]: I0515 13:08:33.016684 2762 kubelet.go:2306] "Pod admission denied" podUID="dfb62bd8-332b-455f-af02-c3f6ba561173" pod="tigera-operator/tigera-operator-6f6897fdc5-xzg8f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:33.217242 kubelet[2762]: I0515 13:08:33.217174 2762 kubelet.go:2306] "Pod admission denied" podUID="f4629e78-f683-4b51-9742-7c1ee8b856a0" pod="tigera-operator/tigera-operator-6f6897fdc5-jbqhg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.315855 kubelet[2762]: I0515 13:08:33.315721 2762 kubelet.go:2306] "Pod admission denied" podUID="c62bad52-b210-4fad-9494-d995a9c81574" pod="tigera-operator/tigera-operator-6f6897fdc5-9bnwz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.418302 kubelet[2762]: I0515 13:08:33.417852 2762 kubelet.go:2306] "Pod admission denied" podUID="01a1e2b1-8b71-4cd9-8669-9568ce10834c" pod="tigera-operator/tigera-operator-6f6897fdc5-5vrbs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.523347 kubelet[2762]: I0515 13:08:33.523298 2762 kubelet.go:2306] "Pod admission denied" podUID="3b29a2c3-3654-4bd8-9d94-49e2315acd2a" pod="tigera-operator/tigera-operator-6f6897fdc5-mjtbf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.616923 kubelet[2762]: I0515 13:08:33.616778 2762 kubelet.go:2306] "Pod admission denied" podUID="ac82d980-22a3-424d-9eec-3eb8d675fb9c" pod="tigera-operator/tigera-operator-6f6897fdc5-87576" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.816657 kubelet[2762]: I0515 13:08:33.816600 2762 kubelet.go:2306] "Pod admission denied" podUID="561aa410-2839-48a2-9700-83fd193ab9b9" pod="tigera-operator/tigera-operator-6f6897fdc5-2ddfv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:33.917399 kubelet[2762]: I0515 13:08:33.917095 2762 kubelet.go:2306] "Pod admission denied" podUID="45ff8e7a-e396-4f7f-b1f6-1a2448d5d300" pod="tigera-operator/tigera-operator-6f6897fdc5-7bjfc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:34.026636 kubelet[2762]: I0515 13:08:34.026582 2762 kubelet.go:2306] "Pod admission denied" podUID="c6c2f4ce-5fc6-4833-a6ea-fcc68631a7cf" pod="tigera-operator/tigera-operator-6f6897fdc5-bw64w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.116025 kubelet[2762]: I0515 13:08:34.115962 2762 kubelet.go:2306] "Pod admission denied" podUID="38c6cb10-bb26-4063-befe-c41c8072d86c" pod="tigera-operator/tigera-operator-6f6897fdc5-hfdlc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.219485 kubelet[2762]: I0515 13:08:34.218679 2762 kubelet.go:2306] "Pod admission denied" podUID="ceda13b1-ede8-4e6b-8333-265de35c6ab7" pod="tigera-operator/tigera-operator-6f6897fdc5-zvkkz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.329601 kubelet[2762]: I0515 13:08:34.329146 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:34.329601 kubelet[2762]: I0515 13:08:34.329205 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:34.329866 kubelet[2762]: I0515 13:08:34.329716 2762 kubelet.go:2306] "Pod admission denied" podUID="a25cb4c5-f184-4cb9-9515-126ca7851836" pod="tigera-operator/tigera-operator-6f6897fdc5-ncgcx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:34.333660 kubelet[2762]: I0515 13:08:34.333616 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:34.350458 kubelet[2762]: I0515 13:08:34.350404 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:08:34.350838 kubelet[2762]: I0515 13:08:34.350502 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350573 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350584 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350592 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350599 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350606 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350626 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:08:34.350838 
kubelet[2762]: E0515 13:08:34.350637 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350645 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350654 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:08:34.350838 kubelet[2762]: E0515 13:08:34.350670 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:08:34.350838 kubelet[2762]: I0515 13:08:34.350680 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:08:34.420937 kubelet[2762]: I0515 13:08:34.420879 2762 kubelet.go:2306] "Pod admission denied" podUID="447ae8db-e29b-4d66-976e-cd0fdcaf18bf" pod="tigera-operator/tigera-operator-6f6897fdc5-5g9v6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.625697 kubelet[2762]: I0515 13:08:34.625633 2762 kubelet.go:2306] "Pod admission denied" podUID="f0dae309-ff82-4210-aacb-718013557900" pod="tigera-operator/tigera-operator-6f6897fdc5-tpcv9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.726584 kubelet[2762]: I0515 13:08:34.725896 2762 kubelet.go:2306] "Pod admission denied" podUID="3c3f4316-8464-4cb9-81e8-0c6d99f64a4d" pod="tigera-operator/tigera-operator-6f6897fdc5-njn4k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.817498 kubelet[2762]: I0515 13:08:34.817420 2762 kubelet.go:2306] "Pod admission denied" podUID="0cd22951-8695-4fd6-9805-3ffb11f46a65" pod="tigera-operator/tigera-operator-6f6897fdc5-7czzl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:34.920087 kubelet[2762]: I0515 13:08:34.918812 2762 kubelet.go:2306] "Pod admission denied" podUID="b6179d9f-0bd7-42cf-a4a9-23b8c8e7944d" pod="tigera-operator/tigera-operator-6f6897fdc5-nqjq7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:34.964920 kubelet[2762]: I0515 13:08:34.964869 2762 kubelet.go:2306] "Pod admission denied" podUID="894c5efa-9045-424e-833b-1b425ca12656" pod="tigera-operator/tigera-operator-6f6897fdc5-vczd6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.076577 kubelet[2762]: I0515 13:08:35.076338 2762 kubelet.go:2306] "Pod admission denied" podUID="464e72b0-1208-455e-94b6-b7409285d0bc" pod="tigera-operator/tigera-operator-6f6897fdc5-dgsd7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.267518 kubelet[2762]: I0515 13:08:35.267353 2762 kubelet.go:2306] "Pod admission denied" podUID="79746233-7d94-4b47-bc03-f5349ffcc9df" pod="tigera-operator/tigera-operator-6f6897fdc5-trtdq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.366195 kubelet[2762]: I0515 13:08:35.366136 2762 kubelet.go:2306] "Pod admission denied" podUID="a93540fc-25f6-4618-9890-28c3c30e234d" pod="tigera-operator/tigera-operator-6f6897fdc5-9mgpb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.467626 kubelet[2762]: I0515 13:08:35.467548 2762 kubelet.go:2306] "Pod admission denied" podUID="7185d871-220b-47ab-85e5-d6f28715c490" pod="tigera-operator/tigera-operator-6f6897fdc5-7cbtf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.663434 kubelet[2762]: I0515 13:08:35.663386 2762 kubelet.go:2306] "Pod admission denied" podUID="dd9bb65a-d4e4-4c8f-947b-654c29343426" pod="tigera-operator/tigera-operator-6f6897fdc5-72dqj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:35.767062 kubelet[2762]: I0515 13:08:35.767005 2762 kubelet.go:2306] "Pod admission denied" podUID="9e5a15ff-d191-4cdb-bf1e-06378db1311a" pod="tigera-operator/tigera-operator-6f6897fdc5-9brm7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.867339 kubelet[2762]: I0515 13:08:35.867268 2762 kubelet.go:2306] "Pod admission denied" podUID="b1a84edb-8e62-4e51-8b2f-60ce1672e29c" pod="tigera-operator/tigera-operator-6f6897fdc5-98sfz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:35.957658 kubelet[2762]: E0515 13:08:35.957430 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:35.959891 kubelet[2762]: E0515 13:08:35.959038 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:35.959969 containerd[1543]: time="2025-05-15T13:08:35.959827911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:08:35.960880 kubelet[2762]: E0515 13:08:35.960832 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344" May 15 13:08:36.048093 containerd[1543]: time="2025-05-15T13:08:36.048012018Z" level=error msg="Failed to destroy network for sandbox \"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" May 15 13:08:36.050784 containerd[1543]: time="2025-05-15T13:08:36.050720445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:36.052505 kubelet[2762]: E0515 13:08:36.052448 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:36.053087 kubelet[2762]: E0515 13:08:36.053046 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:36.053735 kubelet[2762]: E0515 13:08:36.053207 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:36.053298 systemd[1]: run-netns-cni\x2ddf99b50c\x2db3a7\x2d0e48\x2dc89a\x2d76c032be1d26.mount: Deactivated successfully. May 15 13:08:36.054095 kubelet[2762]: E0515 13:08:36.053734 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"066b48e7b145b68eaff833d64faea1cdbef59c73cadffc41c41ee41ed32a293f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:08:36.071302 kubelet[2762]: I0515 13:08:36.071213 2762 kubelet.go:2306] "Pod admission denied" podUID="d1c741c4-042e-4df4-b85e-9e37233a3018" pod="tigera-operator/tigera-operator-6f6897fdc5-w52n6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.166975 kubelet[2762]: I0515 13:08:36.166913 2762 kubelet.go:2306] "Pod admission denied" podUID="955b1d2c-a9ce-4ecd-87fa-e55101e5e5c1" pod="tigera-operator/tigera-operator-6f6897fdc5-4nnwb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.265887 kubelet[2762]: I0515 13:08:36.265744 2762 kubelet.go:2306] "Pod admission denied" podUID="2138069f-e349-4d0f-92a3-ca1ca3b45c2c" pod="tigera-operator/tigera-operator-6f6897fdc5-jx8c8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:36.378135 kubelet[2762]: I0515 13:08:36.377546 2762 kubelet.go:2306] "Pod admission denied" podUID="7da34b20-e349-4c5c-91a1-560934e5410f" pod="tigera-operator/tigera-operator-6f6897fdc5-rj5tl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.468071 kubelet[2762]: I0515 13:08:36.468015 2762 kubelet.go:2306] "Pod admission denied" podUID="dec60784-d1b2-42f2-a174-acdb31813aff" pod="tigera-operator/tigera-operator-6f6897fdc5-n8mhz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.571171 kubelet[2762]: I0515 13:08:36.571031 2762 kubelet.go:2306] "Pod admission denied" podUID="261207f5-f574-4a76-8af5-31a6328d9b43" pod="tigera-operator/tigera-operator-6f6897fdc5-qvcgx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.667265 kubelet[2762]: I0515 13:08:36.667203 2762 kubelet.go:2306] "Pod admission denied" podUID="6461f16e-4342-4aea-9230-74d6e612e7ca" pod="tigera-operator/tigera-operator-6f6897fdc5-xd8jv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.766608 kubelet[2762]: I0515 13:08:36.766542 2762 kubelet.go:2306] "Pod admission denied" podUID="4ff20b56-6fbb-44a2-bd21-df1b1101c033" pod="tigera-operator/tigera-operator-6f6897fdc5-7nb85" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:36.868866 kubelet[2762]: I0515 13:08:36.868828 2762 kubelet.go:2306] "Pod admission denied" podUID="eb80c28d-8cc8-45c6-837b-6220ab7bfccd" pod="tigera-operator/tigera-operator-6f6897fdc5-jn6dv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:36.956695 containerd[1543]: time="2025-05-15T13:08:36.956627098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:08:36.976584 kubelet[2762]: I0515 13:08:36.976328 2762 kubelet.go:2306] "Pod admission denied" podUID="0372e0c9-2c7b-4ccd-80a1-2e35dcca1786" pod="tigera-operator/tigera-operator-6f6897fdc5-rlhqh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.033646 containerd[1543]: time="2025-05-15T13:08:37.033588538Z" level=error msg="Failed to destroy network for sandbox \"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:37.036852 systemd[1]: run-netns-cni\x2dc07141bc\x2df430\x2d86b5\x2dd4d6\x2d316626f65af5.mount: Deactivated successfully. 
May 15 13:08:37.038251 containerd[1543]: time="2025-05-15T13:08:37.038096688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:37.038434 kubelet[2762]: E0515 13:08:37.038394 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:37.038497 kubelet[2762]: E0515 13:08:37.038461 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:37.038497 kubelet[2762]: E0515 13:08:37.038484 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:37.038650 kubelet[2762]: E0515 13:08:37.038527 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a626219c1ec529650be8ad46fbb6b5cad8ef45a2aa17dd789cd95f40724efcb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:08:37.081480 kubelet[2762]: I0515 13:08:37.079302 2762 kubelet.go:2306] "Pod admission denied" podUID="c0db891a-e7d9-4a95-80a2-a2938dddc9a2" pod="tigera-operator/tigera-operator-6f6897fdc5-94755" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.172464 kubelet[2762]: I0515 13:08:37.171748 2762 kubelet.go:2306] "Pod admission denied" podUID="c4c23f4a-28fa-4dc1-831e-83668b3fba12" pod="tigera-operator/tigera-operator-6f6897fdc5-86s2s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.270784 kubelet[2762]: I0515 13:08:37.270728 2762 kubelet.go:2306] "Pod admission denied" podUID="c0f45bd5-80b8-4ece-9c56-4fa36cb04c1e" pod="tigera-operator/tigera-operator-6f6897fdc5-9f4j4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:37.373876 kubelet[2762]: I0515 13:08:37.373817 2762 kubelet.go:2306] "Pod admission denied" podUID="aa037394-a652-4431-b010-906564452395" pod="tigera-operator/tigera-operator-6f6897fdc5-67xdh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.468269 kubelet[2762]: I0515 13:08:37.468120 2762 kubelet.go:2306] "Pod admission denied" podUID="88f7b741-a0dc-440a-8029-65546b119543" pod="tigera-operator/tigera-operator-6f6897fdc5-lzqcr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.674669 kubelet[2762]: I0515 13:08:37.674607 2762 kubelet.go:2306] "Pod admission denied" podUID="563f84da-d7c0-430e-9f5b-0163f8112a23" pod="tigera-operator/tigera-operator-6f6897fdc5-2zqfc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.769096 kubelet[2762]: I0515 13:08:37.768747 2762 kubelet.go:2306] "Pod admission denied" podUID="eb87060f-8afc-453c-8da9-8d08e0e6bfbd" pod="tigera-operator/tigera-operator-6f6897fdc5-52s56" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.874441 kubelet[2762]: I0515 13:08:37.874395 2762 kubelet.go:2306] "Pod admission denied" podUID="62eee899-6015-4f56-9c7d-6e5287469669" pod="tigera-operator/tigera-operator-6f6897fdc5-b8xnf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:37.968717 kubelet[2762]: I0515 13:08:37.968655 2762 kubelet.go:2306] "Pod admission denied" podUID="e7ca1968-d18a-4c26-8d45-05fd5e27e321" pod="tigera-operator/tigera-operator-6f6897fdc5-m4wqp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.070425 kubelet[2762]: I0515 13:08:38.070019 2762 kubelet.go:2306] "Pod admission denied" podUID="ea9c11f8-e682-49ff-a2fe-1166796c7f84" pod="tigera-operator/tigera-operator-6f6897fdc5-gg8gn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:38.267495 kubelet[2762]: I0515 13:08:38.267442 2762 kubelet.go:2306] "Pod admission denied" podUID="4004e7ab-69ab-423f-ac63-5673db949494" pod="tigera-operator/tigera-operator-6f6897fdc5-bsg9c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.366315 kubelet[2762]: I0515 13:08:38.366187 2762 kubelet.go:2306] "Pod admission denied" podUID="5750cd0f-4a45-4049-a4e5-1a210767cd1d" pod="tigera-operator/tigera-operator-6f6897fdc5-fq8sh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.418923 kubelet[2762]: I0515 13:08:38.418857 2762 kubelet.go:2306] "Pod admission denied" podUID="27695826-522d-4606-ac69-ffaf0b5f21f9" pod="tigera-operator/tigera-operator-6f6897fdc5-rn6ml" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.526418 kubelet[2762]: I0515 13:08:38.526359 2762 kubelet.go:2306] "Pod admission denied" podUID="9083fc62-6f30-4291-85de-088d92ad4669" pod="tigera-operator/tigera-operator-6f6897fdc5-5lmkr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.620809 kubelet[2762]: I0515 13:08:38.620762 2762 kubelet.go:2306] "Pod admission denied" podUID="43cb06fe-c51d-4b77-b02e-403d734f85ca" pod="tigera-operator/tigera-operator-6f6897fdc5-jkr6x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.664311 kubelet[2762]: I0515 13:08:38.664256 2762 kubelet.go:2306] "Pod admission denied" podUID="e9d462a1-02b1-4fc6-a9ca-3cd3ff726b90" pod="tigera-operator/tigera-operator-6f6897fdc5-f99cl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:38.766477 kubelet[2762]: I0515 13:08:38.766420 2762 kubelet.go:2306] "Pod admission denied" podUID="f436afef-3b8b-4a2d-aade-da12e0e15067" pod="tigera-operator/tigera-operator-6f6897fdc5-s6b2p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:38.969759 kubelet[2762]: I0515 13:08:38.969079 2762 kubelet.go:2306] "Pod admission denied" podUID="2ab0f3e2-f491-4fe7-9e14-19e0a2ecccbb" pod="tigera-operator/tigera-operator-6f6897fdc5-gpqph" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.067906 kubelet[2762]: I0515 13:08:39.067091 2762 kubelet.go:2306] "Pod admission denied" podUID="57b57fb1-1bb9-4284-9847-1149f9f0986f" pod="tigera-operator/tigera-operator-6f6897fdc5-8c88q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.168236 kubelet[2762]: I0515 13:08:39.168172 2762 kubelet.go:2306] "Pod admission denied" podUID="24a12d33-babb-49d3-9a32-d7f4b3529909" pod="tigera-operator/tigera-operator-6f6897fdc5-l97r5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.269625 kubelet[2762]: I0515 13:08:39.269462 2762 kubelet.go:2306] "Pod admission denied" podUID="cf3f35d6-b980-41a5-b495-77291ec8bbd5" pod="tigera-operator/tigera-operator-6f6897fdc5-mtpc7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.373133 kubelet[2762]: I0515 13:08:39.373068 2762 kubelet.go:2306] "Pod admission denied" podUID="a3d1b9b9-09d5-4f57-b0e8-25e5a01f34e4" pod="tigera-operator/tigera-operator-6f6897fdc5-jpc9v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.469308 kubelet[2762]: I0515 13:08:39.469236 2762 kubelet.go:2306] "Pod admission denied" podUID="094316b8-71f0-4a8b-b50d-97e7f1ceb52d" pod="tigera-operator/tigera-operator-6f6897fdc5-st87v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.567591 kubelet[2762]: I0515 13:08:39.566465 2762 kubelet.go:2306] "Pod admission denied" podUID="f5cf831e-2d03-4a2f-acb2-150bde406c48" pod="tigera-operator/tigera-operator-6f6897fdc5-bw8wr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:39.669180 kubelet[2762]: I0515 13:08:39.669117 2762 kubelet.go:2306] "Pod admission denied" podUID="24bbc2bc-e85c-4190-a8f6-8ead66f7943a" pod="tigera-operator/tigera-operator-6f6897fdc5-jfxsf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.772509 kubelet[2762]: I0515 13:08:39.772439 2762 kubelet.go:2306] "Pod admission denied" podUID="9121d88b-4c60-409f-9d6d-8528a85dd5df" pod="tigera-operator/tigera-operator-6f6897fdc5-jfzkb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:39.961690 kubelet[2762]: E0515 13:08:39.961305 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:39.963772 containerd[1543]: time="2025-05-15T13:08:39.963694660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:08:39.973368 kubelet[2762]: I0515 13:08:39.973316 2762 kubelet.go:2306] "Pod admission denied" podUID="f870d6e3-b9f3-4183-934e-efd55c9bd184" pod="tigera-operator/tigera-operator-6f6897fdc5-jkf8w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.038278 containerd[1543]: time="2025-05-15T13:08:40.038207287Z" level=error msg="Failed to destroy network for sandbox \"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:40.040851 systemd[1]: run-netns-cni\x2da26a2a64\x2d6e90\x2dbe1a\x2d89cc\x2dface277183d5.mount: Deactivated successfully. 
May 15 13:08:40.041873 containerd[1543]: time="2025-05-15T13:08:40.041825073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:40.042601 kubelet[2762]: E0515 13:08:40.042348 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:40.042601 kubelet[2762]: E0515 13:08:40.042424 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:40.042601 kubelet[2762]: E0515 13:08:40.042457 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:40.042601 kubelet[2762]: E0515 13:08:40.042523 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d5a1effb9d6d120ba43a51e998693914d5a3dd1f6b178b4ca005bf342fdd827\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:08:40.068880 kubelet[2762]: I0515 13:08:40.068828 2762 kubelet.go:2306] "Pod admission denied" podUID="828f2c57-8432-4e1e-9f55-d95558bfddd3" pod="tigera-operator/tigera-operator-6f6897fdc5-46zr9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.166776 kubelet[2762]: I0515 13:08:40.166714 2762 kubelet.go:2306] "Pod admission denied" podUID="38adf5a0-8f8e-4b6f-98b6-de34c74111b5" pod="tigera-operator/tigera-operator-6f6897fdc5-k2zjt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.369241 kubelet[2762]: I0515 13:08:40.369159 2762 kubelet.go:2306] "Pod admission denied" podUID="9eb7d224-fb22-4cbd-8db8-bf628b23de9e" pod="tigera-operator/tigera-operator-6f6897fdc5-c89l8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.470747 kubelet[2762]: I0515 13:08:40.470679 2762 kubelet.go:2306] "Pod admission denied" podUID="8010be34-9d90-4272-870d-16c442496ec3" pod="tigera-operator/tigera-operator-6f6897fdc5-lrhqn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:40.578413 kubelet[2762]: I0515 13:08:40.578324 2762 kubelet.go:2306] "Pod admission denied" podUID="514098bb-35d2-43ec-8f68-2c254252a8dc" pod="tigera-operator/tigera-operator-6f6897fdc5-kc9fs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.669368 kubelet[2762]: I0515 13:08:40.669241 2762 kubelet.go:2306] "Pod admission denied" podUID="d713fbe5-ce35-4059-8549-af2247ca0e4c" pod="tigera-operator/tigera-operator-6f6897fdc5-s8wpc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.719849 kubelet[2762]: I0515 13:08:40.719787 2762 kubelet.go:2306] "Pod admission denied" podUID="07f2f8b1-088f-4a71-8801-e94297e2e2a4" pod="tigera-operator/tigera-operator-6f6897fdc5-f6stk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.818648 kubelet[2762]: I0515 13:08:40.818597 2762 kubelet.go:2306] "Pod admission denied" podUID="536ae2e3-a373-4ef6-8bfe-7fb9883fc21c" pod="tigera-operator/tigera-operator-6f6897fdc5-pk8rd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:40.957815 containerd[1543]: time="2025-05-15T13:08:40.957650314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:08:41.027323 containerd[1543]: time="2025-05-15T13:08:41.027038513Z" level=error msg="Failed to destroy network for sandbox \"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:41.030276 kubelet[2762]: I0515 13:08:41.029700 2762 kubelet.go:2306] "Pod admission denied" podUID="376fb9bd-2990-46c0-9fe2-082fd7067a9d" pod="tigera-operator/tigera-operator-6f6897fdc5-9kl2z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:41.030944 systemd[1]: run-netns-cni\x2d08db255b\x2d109e\x2d8fd1\x2dacc2\x2d246a4ac937df.mount: Deactivated successfully. May 15 13:08:41.032544 containerd[1543]: time="2025-05-15T13:08:41.032486763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:41.033123 kubelet[2762]: E0515 13:08:41.033058 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:41.033283 kubelet[2762]: E0515 13:08:41.033239 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:41.033460 kubelet[2762]: E0515 13:08:41.033381 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:41.033587 kubelet[2762]: E0515 13:08:41.033531 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc875e2a56e261db08de3d11f6cb6f7da58dc010d119f88e5c0cc96f242fca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:08:41.121576 kubelet[2762]: I0515 13:08:41.121511 2762 kubelet.go:2306] "Pod admission denied" podUID="8fccfeaf-f8ca-4d1a-9f35-701d1743120d" pod="tigera-operator/tigera-operator-6f6897fdc5-5pxzf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.219785 kubelet[2762]: I0515 13:08:41.219336 2762 kubelet.go:2306] "Pod admission denied" podUID="f8bc9987-7653-4199-919f-c078d371e28f" pod="tigera-operator/tigera-operator-6f6897fdc5-bvhl6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.332315 kubelet[2762]: I0515 13:08:41.331234 2762 kubelet.go:2306] "Pod admission denied" podUID="d3717181-b9a9-41d5-88a1-3dd0901fb269" pod="tigera-operator/tigera-operator-6f6897fdc5-zqqwd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:41.423919 kubelet[2762]: I0515 13:08:41.423871 2762 kubelet.go:2306] "Pod admission denied" podUID="92d7fd1c-9974-497d-942b-0ccef231d785" pod="tigera-operator/tigera-operator-6f6897fdc5-vwsf6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.619749 kubelet[2762]: I0515 13:08:41.619705 2762 kubelet.go:2306] "Pod admission denied" podUID="96f68cb3-6f37-432c-8589-e2a988b1314d" pod="tigera-operator/tigera-operator-6f6897fdc5-rdhwq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.717029 kubelet[2762]: I0515 13:08:41.716969 2762 kubelet.go:2306] "Pod admission denied" podUID="f732cc9a-bc28-4704-a632-b6df0c1d6914" pod="tigera-operator/tigera-operator-6f6897fdc5-5bl9d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.768028 kubelet[2762]: I0515 13:08:41.767955 2762 kubelet.go:2306] "Pod admission denied" podUID="25e5eae6-fc1a-4bab-86d4-80e727331c96" pod="tigera-operator/tigera-operator-6f6897fdc5-9gfrd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.867192 kubelet[2762]: I0515 13:08:41.867113 2762 kubelet.go:2306] "Pod admission denied" podUID="98d2f4f8-5dc0-46a8-96ba-83127ad46a34" pod="tigera-operator/tigera-operator-6f6897fdc5-tsnbf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:41.972091 kubelet[2762]: I0515 13:08:41.971774 2762 kubelet.go:2306] "Pod admission denied" podUID="acd5236d-81cc-41b0-b71f-ec768f437a4c" pod="tigera-operator/tigera-operator-6f6897fdc5-k7wzb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.077868 kubelet[2762]: I0515 13:08:42.075826 2762 kubelet.go:2306] "Pod admission denied" podUID="f7b3bbb5-db9d-4368-a9a9-7557afa62502" pod="tigera-operator/tigera-operator-6f6897fdc5-ldbgg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:42.169366 kubelet[2762]: I0515 13:08:42.169300 2762 kubelet.go:2306] "Pod admission denied" podUID="cf092ef5-78a0-4239-b02b-8f1faa7ff763" pod="tigera-operator/tigera-operator-6f6897fdc5-cr8t9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.265179 kubelet[2762]: I0515 13:08:42.265038 2762 kubelet.go:2306] "Pod admission denied" podUID="ab72c4e8-7b12-4dac-8da9-b2db5e42bc90" pod="tigera-operator/tigera-operator-6f6897fdc5-ds98b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.364834 kubelet[2762]: I0515 13:08:42.364786 2762 kubelet.go:2306] "Pod admission denied" podUID="d38399b0-430d-4b47-b46e-154775b64b79" pod="tigera-operator/tigera-operator-6f6897fdc5-g2lkp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.469599 kubelet[2762]: I0515 13:08:42.469545 2762 kubelet.go:2306] "Pod admission denied" podUID="fa9f3023-1af7-432c-8ebb-1a204502a7d7" pod="tigera-operator/tigera-operator-6f6897fdc5-zqpwf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.568502 kubelet[2762]: I0515 13:08:42.568034 2762 kubelet.go:2306] "Pod admission denied" podUID="9f2161f4-11e6-42eb-ae75-f0d80382d023" pod="tigera-operator/tigera-operator-6f6897fdc5-ctf5b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.668347 kubelet[2762]: I0515 13:08:42.668291 2762 kubelet.go:2306] "Pod admission denied" podUID="eae39e95-23fa-4ad4-9410-2d14e364e8cb" pod="tigera-operator/tigera-operator-6f6897fdc5-wk2x5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:42.887114 kubelet[2762]: I0515 13:08:42.887043 2762 kubelet.go:2306] "Pod admission denied" podUID="0ceba32e-9eea-4513-aecc-9c634acff383" pod="tigera-operator/tigera-operator-6f6897fdc5-cz2lq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:42.968186 kubelet[2762]: I0515 13:08:42.968125 2762 kubelet.go:2306] "Pod admission denied" podUID="a477feab-62a3-4492-8b0e-f17bdf0fbe54" pod="tigera-operator/tigera-operator-6f6897fdc5-cc9f8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.068822 kubelet[2762]: I0515 13:08:43.068757 2762 kubelet.go:2306] "Pod admission denied" podUID="1e299c25-c906-4f44-8247-8bac852a6c0f" pod="tigera-operator/tigera-operator-6f6897fdc5-557ff" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.275006 kubelet[2762]: I0515 13:08:43.273837 2762 kubelet.go:2306] "Pod admission denied" podUID="27ffbbcf-3261-4fd4-a84a-ba513bc062cb" pod="tigera-operator/tigera-operator-6f6897fdc5-wlt75" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.369957 kubelet[2762]: I0515 13:08:43.369890 2762 kubelet.go:2306] "Pod admission denied" podUID="d7080a90-8416-4f41-81e4-28bc6d12cb86" pod="tigera-operator/tigera-operator-6f6897fdc5-z2grz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.418199 kubelet[2762]: I0515 13:08:43.418136 2762 kubelet.go:2306] "Pod admission denied" podUID="0eeee9c9-a1d3-4fe6-982e-d5659f2c7af1" pod="tigera-operator/tigera-operator-6f6897fdc5-2gjrm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.524883 kubelet[2762]: I0515 13:08:43.523921 2762 kubelet.go:2306] "Pod admission denied" podUID="89cf7790-c096-46fa-8045-e5eee6744211" pod="tigera-operator/tigera-operator-6f6897fdc5-m2c89" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.622967 kubelet[2762]: I0515 13:08:43.622919 2762 kubelet.go:2306] "Pod admission denied" podUID="222f3d59-3b6c-4b52-a11f-51423709700d" pod="tigera-operator/tigera-operator-6f6897fdc5-ttsft" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:43.723353 kubelet[2762]: I0515 13:08:43.723280 2762 kubelet.go:2306] "Pod admission denied" podUID="99eaf885-70d1-4b1e-b6ab-f74809d3b721" pod="tigera-operator/tigera-operator-6f6897fdc5-hcvht" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:43.924583 kubelet[2762]: I0515 13:08:43.922573 2762 kubelet.go:2306] "Pod admission denied" podUID="1298c933-8651-4cd8-a23a-25b71be1f73a" pod="tigera-operator/tigera-operator-6f6897fdc5-9k8v8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.017541 kubelet[2762]: I0515 13:08:44.017467 2762 kubelet.go:2306] "Pod admission denied" podUID="0e9696eb-2d66-408d-aece-74d1e1126a19" pod="tigera-operator/tigera-operator-6f6897fdc5-nrp7q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.121451 kubelet[2762]: I0515 13:08:44.121176 2762 kubelet.go:2306] "Pod admission denied" podUID="4969b643-0f79-45d6-9d44-2f492bd4f085" pod="tigera-operator/tigera-operator-6f6897fdc5-ltrcv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.330676 kubelet[2762]: I0515 13:08:44.328401 2762 kubelet.go:2306] "Pod admission denied" podUID="5c938806-6f09-4d26-a9d1-36fa4f5ab48e" pod="tigera-operator/tigera-operator-6f6897fdc5-9kg6x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:44.380212 kubelet[2762]: I0515 13:08:44.380171 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:44.380212 kubelet[2762]: I0515 13:08:44.380216 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:44.383495 kubelet[2762]: I0515 13:08:44.383029 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:44.395899 kubelet[2762]: I0515 13:08:44.395869 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:08:44.395986 kubelet[2762]: I0515 13:08:44.395940 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:08:44.395986 kubelet[2762]: E0515 13:08:44.395978 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:44.395986 kubelet[2762]: E0515 13:08:44.395988 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.395996 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396003 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396011 2762 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396022 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396030 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396038 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396046 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:08:44.396119 kubelet[2762]: E0515 13:08:44.396060 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:08:44.396119 kubelet[2762]: I0515 13:08:44.396069 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:08:44.417326 kubelet[2762]: I0515 13:08:44.417282 2762 kubelet.go:2306] "Pod admission denied" podUID="367424e8-6035-47dc-875b-6569c4aaaaf7" pod="tigera-operator/tigera-operator-6f6897fdc5-5z7v6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.521233 kubelet[2762]: I0515 13:08:44.521171 2762 kubelet.go:2306] "Pod admission denied" podUID="f25b4035-682a-4290-9872-cc5587e81d79" pod="tigera-operator/tigera-operator-6f6897fdc5-fdlnj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.628187 kubelet[2762]: I0515 13:08:44.628116 2762 kubelet.go:2306] "Pod admission denied" podUID="1a9a2f13-440f-4b13-9c5b-4db00b5ad90b" pod="tigera-operator/tigera-operator-6f6897fdc5-84kcr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:44.718294 kubelet[2762]: I0515 13:08:44.718246 2762 kubelet.go:2306] "Pod admission denied" podUID="afd00ade-3934-4d65-8bcb-7a6cf56d3e33" pod="tigera-operator/tigera-operator-6f6897fdc5-s6n8x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.819340 kubelet[2762]: I0515 13:08:44.819291 2762 kubelet.go:2306] "Pod admission denied" podUID="d47b4f47-a434-4f1f-b6e6-8fbbc40b10eb" pod="tigera-operator/tigera-operator-6f6897fdc5-fhrdz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.867924 kubelet[2762]: I0515 13:08:44.867880 2762 kubelet.go:2306] "Pod admission denied" podUID="047cb314-1b4a-48c1-a3d7-160028bd955e" pod="tigera-operator/tigera-operator-6f6897fdc5-bldm9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:44.968352 kubelet[2762]: I0515 13:08:44.968031 2762 kubelet.go:2306] "Pod admission denied" podUID="be44c54e-5e05-4abb-b589-e68ef6c2bfcc" pod="tigera-operator/tigera-operator-6f6897fdc5-nm7d4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.074112 kubelet[2762]: I0515 13:08:45.074054 2762 kubelet.go:2306] "Pod admission denied" podUID="d417d52f-840d-4ffc-ac78-2c70e77560dd" pod="tigera-operator/tigera-operator-6f6897fdc5-s8df8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.172291 kubelet[2762]: I0515 13:08:45.172231 2762 kubelet.go:2306] "Pod admission denied" podUID="4d1757b9-7a75-4b37-a787-7e3dd3a1a716" pod="tigera-operator/tigera-operator-6f6897fdc5-gfqmq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.281650 kubelet[2762]: I0515 13:08:45.281495 2762 kubelet.go:2306] "Pod admission denied" podUID="f7133007-34bb-4818-8dc8-33173b63509c" pod="tigera-operator/tigera-operator-6f6897fdc5-k2n74" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:45.326585 kubelet[2762]: I0515 13:08:45.326299 2762 kubelet.go:2306] "Pod admission denied" podUID="4daf0fc9-988e-4836-abf4-4c86622e56dd" pod="tigera-operator/tigera-operator-6f6897fdc5-tbx28" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.420322 kubelet[2762]: I0515 13:08:45.420061 2762 kubelet.go:2306] "Pod admission denied" podUID="155fe265-4bdd-40a2-950d-ac5dc4a71471" pod="tigera-operator/tigera-operator-6f6897fdc5-q4h4c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.520672 kubelet[2762]: I0515 13:08:45.520621 2762 kubelet.go:2306] "Pod admission denied" podUID="6f0913d8-7472-424a-8334-7499182feae0" pod="tigera-operator/tigera-operator-6f6897fdc5-2q7pt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.622937 kubelet[2762]: I0515 13:08:45.622874 2762 kubelet.go:2306] "Pod admission denied" podUID="d778b712-228e-42b7-9ed5-803a986391b8" pod="tigera-operator/tigera-operator-6f6897fdc5-9kkn8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.718187 kubelet[2762]: I0515 13:08:45.718121 2762 kubelet.go:2306] "Pod admission denied" podUID="8e01efdf-7b2d-4bef-8ec9-390dea6fcea5" pod="tigera-operator/tigera-operator-6f6897fdc5-pgkkf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.821641 kubelet[2762]: I0515 13:08:45.821539 2762 kubelet.go:2306] "Pod admission denied" podUID="82517dae-8d5e-4ddc-9d2a-1e60bc2a1166" pod="tigera-operator/tigera-operator-6f6897fdc5-nj9tj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:45.943588 kubelet[2762]: I0515 13:08:45.941797 2762 kubelet.go:2306] "Pod admission denied" podUID="3bc386e6-370e-4478-9fcd-95db28e6fabb" pod="tigera-operator/tigera-operator-6f6897fdc5-6wsnb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:45.956587 kubelet[2762]: E0515 13:08:45.956349 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:46.074251 kubelet[2762]: I0515 13:08:46.074194 2762 kubelet.go:2306] "Pod admission denied" podUID="f1435d54-7fd8-4206-a551-310eebb032a4" pod="tigera-operator/tigera-operator-6f6897fdc5-pv2bn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.121473 kubelet[2762]: I0515 13:08:46.121403 2762 kubelet.go:2306] "Pod admission denied" podUID="3ce34bec-baaa-42ed-93ae-3bf0cabd7f94" pod="tigera-operator/tigera-operator-6f6897fdc5-4ldvx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.230674 kubelet[2762]: I0515 13:08:46.230071 2762 kubelet.go:2306] "Pod admission denied" podUID="7852a1d9-8021-47d7-b684-456ae2a81074" pod="tigera-operator/tigera-operator-6f6897fdc5-jlf6p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.321336 kubelet[2762]: I0515 13:08:46.321253 2762 kubelet.go:2306] "Pod admission denied" podUID="93cfc51e-7480-4084-aff0-2d041548db7c" pod="tigera-operator/tigera-operator-6f6897fdc5-hhmkm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.419577 kubelet[2762]: I0515 13:08:46.419176 2762 kubelet.go:2306] "Pod admission denied" podUID="35903e62-255c-41a5-b914-7a1688704406" pod="tigera-operator/tigera-operator-6f6897fdc5-s2h9j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.519567 kubelet[2762]: I0515 13:08:46.519408 2762 kubelet.go:2306] "Pod admission denied" podUID="9c21fb2e-b8b0-487e-8a2c-e4204d48fe89" pod="tigera-operator/tigera-operator-6f6897fdc5-vgj49" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:46.571766 kubelet[2762]: I0515 13:08:46.571698 2762 kubelet.go:2306] "Pod admission denied" podUID="994e0a18-6bc4-4af3-853f-d5dc8630d4a4" pod="tigera-operator/tigera-operator-6f6897fdc5-pzlnn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.670215 kubelet[2762]: I0515 13:08:46.670157 2762 kubelet.go:2306] "Pod admission denied" podUID="35765c9c-77b3-4bf9-9faf-64757cda16c5" pod="tigera-operator/tigera-operator-6f6897fdc5-kgqbl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.884916 kubelet[2762]: I0515 13:08:46.884716 2762 kubelet.go:2306] "Pod admission denied" podUID="2ad69768-9e79-4adc-9b97-fbb4a874501c" pod="tigera-operator/tigera-operator-6f6897fdc5-46npk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:46.957727 kubelet[2762]: E0515 13:08:46.957679 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:46.959816 containerd[1543]: time="2025-05-15T13:08:46.959740754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:08:46.961820 kubelet[2762]: E0515 13:08:46.960768 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:46.964457 containerd[1543]: time="2025-05-15T13:08:46.964389292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 13:08:47.009092 kubelet[2762]: I0515 13:08:47.007070 2762 kubelet.go:2306] "Pod admission denied" podUID="7bef66a4-1aba-4cbf-82f1-e07309b9e349" pod="tigera-operator/tigera-operator-6f6897fdc5-kz57r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:47.070827 containerd[1543]: time="2025-05-15T13:08:47.070714159Z" level=error msg="Failed to destroy network for sandbox \"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:47.073162 systemd[1]: run-netns-cni\x2db4ebb563\x2d5dfd\x2d4ed6\x2d1119\x2d693ba2e7522b.mount: Deactivated successfully. May 15 13:08:47.073879 containerd[1543]: time="2025-05-15T13:08:47.073615573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:47.073999 kubelet[2762]: E0515 13:08:47.073851 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:47.073999 kubelet[2762]: E0515 13:08:47.073912 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:47.073999 kubelet[2762]: E0515 13:08:47.073935 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:47.073999 kubelet[2762]: E0515 13:08:47.073981 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a3062eb78a5694277b26c933cc064df0f0603874d7f3fc64ea088262f3241b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:08:47.120714 kubelet[2762]: I0515 13:08:47.120656 2762 kubelet.go:2306] "Pod admission denied" podUID="affcb3e6-89c6-4141-add5-b7b5e626d7eb" pod="tigera-operator/tigera-operator-6f6897fdc5-g87l9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.219594 kubelet[2762]: I0515 13:08:47.218837 2762 kubelet.go:2306] "Pod admission denied" podUID="6dc47d85-dab8-4a6a-aeac-b0a20456826d" pod="tigera-operator/tigera-operator-6f6897fdc5-5lf9h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:47.423719 kubelet[2762]: I0515 13:08:47.423667 2762 kubelet.go:2306] "Pod admission denied" podUID="e5c6a2d2-0639-4601-9913-9c535964d6e3" pod="tigera-operator/tigera-operator-6f6897fdc5-j5qsj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.522603 kubelet[2762]: I0515 13:08:47.522442 2762 kubelet.go:2306] "Pod admission denied" podUID="03c1f541-c376-489e-b3f1-bc84af541d73" pod="tigera-operator/tigera-operator-6f6897fdc5-vx5jl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.636901 kubelet[2762]: I0515 13:08:47.636834 2762 kubelet.go:2306] "Pod admission denied" podUID="9cc9121b-8264-4201-80c3-3390fb30a2ef" pod="tigera-operator/tigera-operator-6f6897fdc5-hnztr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.719195 kubelet[2762]: I0515 13:08:47.719138 2762 kubelet.go:2306] "Pod admission denied" podUID="7498f5d7-410d-4816-a005-44266c9bd020" pod="tigera-operator/tigera-operator-6f6897fdc5-tmqqn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.770836 kubelet[2762]: I0515 13:08:47.769616 2762 kubelet.go:2306] "Pod admission denied" podUID="b47861ea-b34d-438f-bbb2-5757602e4f6d" pod="tigera-operator/tigera-operator-6f6897fdc5-rt7q4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:47.875970 kubelet[2762]: I0515 13:08:47.875924 2762 kubelet.go:2306] "Pod admission denied" podUID="82d2e33a-e389-4e42-906c-724af37a102c" pod="tigera-operator/tigera-operator-6f6897fdc5-z5cg4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:47.959437 containerd[1543]: time="2025-05-15T13:08:47.959397661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:08:47.978582 kubelet[2762]: I0515 13:08:47.978402 2762 kubelet.go:2306] "Pod admission denied" podUID="267c50ec-36ed-4d2f-bc52-2d9d30e83679" pod="tigera-operator/tigera-operator-6f6897fdc5-pr4fc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.036758 containerd[1543]: time="2025-05-15T13:08:48.036709417Z" level=error msg="Failed to destroy network for sandbox \"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:48.039730 containerd[1543]: time="2025-05-15T13:08:48.039629952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:48.040003 kubelet[2762]: E0515 13:08:48.039965 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:48.040052 kubelet[2762]: 
E0515 13:08:48.040029 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:48.040077 kubelet[2762]: E0515 13:08:48.040051 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:48.040321 kubelet[2762]: E0515 13:08:48.040095 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3725a4b49bfa29c1176f7411d323891eb7ba70971891fd309a266cf8d4944adc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:08:48.041890 systemd[1]: run-netns-cni\x2d5a0f2630\x2d6437\x2d9f30\x2db1c3\x2d83ed00596233.mount: Deactivated successfully. 
May 15 13:08:48.070614 kubelet[2762]: I0515 13:08:48.069604 2762 kubelet.go:2306] "Pod admission denied" podUID="76910b87-153d-4183-a626-7cbae5ec4b24" pod="tigera-operator/tigera-operator-6f6897fdc5-sn7ss" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.186396 kubelet[2762]: I0515 13:08:48.185696 2762 kubelet.go:2306] "Pod admission denied" podUID="49f0665e-c3ff-4a8d-8390-31e2eb735355" pod="tigera-operator/tigera-operator-6f6897fdc5-db995" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.275649 kubelet[2762]: I0515 13:08:48.275604 2762 kubelet.go:2306] "Pod admission denied" podUID="ecfaa8b7-c366-4303-923c-436a041d7d7a" pod="tigera-operator/tigera-operator-6f6897fdc5-rxp8n" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.374350 kubelet[2762]: I0515 13:08:48.374125 2762 kubelet.go:2306] "Pod admission denied" podUID="e3f133b0-4584-4b44-8c48-339e7a146de2" pod="tigera-operator/tigera-operator-6f6897fdc5-87hkb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.495972 kubelet[2762]: I0515 13:08:48.495590 2762 kubelet.go:2306] "Pod admission denied" podUID="1553a6b1-a06c-4947-a893-8918127367e1" pod="tigera-operator/tigera-operator-6f6897fdc5-5qhmv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.572026 kubelet[2762]: I0515 13:08:48.571953 2762 kubelet.go:2306] "Pod admission denied" podUID="87c822f7-3852-4d5e-a62a-e13a5f63341b" pod="tigera-operator/tigera-operator-6f6897fdc5-tmtrf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.671127 kubelet[2762]: I0515 13:08:48.670760 2762 kubelet.go:2306] "Pod admission denied" podUID="66946d8a-f553-4e9f-b205-374f493255de" pod="tigera-operator/tigera-operator-6f6897fdc5-7z845" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:48.886731 kubelet[2762]: I0515 13:08:48.884942 2762 kubelet.go:2306] "Pod admission denied" podUID="8bbd49ad-f3dc-43b2-9718-65365788d62e" pod="tigera-operator/tigera-operator-6f6897fdc5-7t8hk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:48.978770 kubelet[2762]: I0515 13:08:48.978710 2762 kubelet.go:2306] "Pod admission denied" podUID="1daab2f0-a0a7-47f9-86ed-669985126317" pod="tigera-operator/tigera-operator-6f6897fdc5-5qz8q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.081459 kubelet[2762]: I0515 13:08:49.081194 2762 kubelet.go:2306] "Pod admission denied" podUID="7e1d659c-a2e5-4f99-ba45-712b63afb993" pod="tigera-operator/tigera-operator-6f6897fdc5-874hn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.288766 kubelet[2762]: I0515 13:08:49.288360 2762 kubelet.go:2306] "Pod admission denied" podUID="74bf4d0c-9fac-42a0-8b4e-7fdaf44ede31" pod="tigera-operator/tigera-operator-6f6897fdc5-f5ksv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.384581 kubelet[2762]: I0515 13:08:49.384325 2762 kubelet.go:2306] "Pod admission denied" podUID="426b0d29-3aba-4e56-b77d-48d436764f2c" pod="tigera-operator/tigera-operator-6f6897fdc5-xd9dz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.493982 kubelet[2762]: I0515 13:08:49.493735 2762 kubelet.go:2306] "Pod admission denied" podUID="eae16aed-9fdb-4796-af68-4e69a654a2dd" pod="tigera-operator/tigera-operator-6f6897fdc5-b6bx8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.583660 kubelet[2762]: I0515 13:08:49.583528 2762 kubelet.go:2306] "Pod admission denied" podUID="194a731e-9474-48c3-857e-cb82dd8d5496" pod="tigera-operator/tigera-operator-6f6897fdc5-fpwfk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:49.702895 kubelet[2762]: I0515 13:08:49.702835 2762 kubelet.go:2306] "Pod admission denied" podUID="db188f2c-c811-4fcb-b258-7d35927da20f" pod="tigera-operator/tigera-operator-6f6897fdc5-459jt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.884154 kubelet[2762]: I0515 13:08:49.884103 2762 kubelet.go:2306] "Pod admission denied" podUID="ad7cc68b-ab9a-464a-8a7f-6f030a8c0e50" pod="tigera-operator/tigera-operator-6f6897fdc5-5ng97" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:49.986922 kubelet[2762]: I0515 13:08:49.986833 2762 kubelet.go:2306] "Pod admission denied" podUID="1a7eeb29-161a-40f7-9547-2eb78a77a7c6" pod="tigera-operator/tigera-operator-6f6897fdc5-rmqx4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.062086 kubelet[2762]: I0515 13:08:50.060920 2762 kubelet.go:2306] "Pod admission denied" podUID="6e96280b-e5fc-4352-b895-157560e30bfb" pod="tigera-operator/tigera-operator-6f6897fdc5-nmbpn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.178821 kubelet[2762]: I0515 13:08:50.178341 2762 kubelet.go:2306] "Pod admission denied" podUID="c0e0c9b5-5577-4d2b-abc9-04b52b542427" pod="tigera-operator/tigera-operator-6f6897fdc5-g8t78" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.276054 kubelet[2762]: I0515 13:08:50.275985 2762 kubelet.go:2306] "Pod admission denied" podUID="e7c8025b-677b-41ae-a651-3321479f1e6e" pod="tigera-operator/tigera-operator-6f6897fdc5-vwv8l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.493724 kubelet[2762]: I0515 13:08:50.492913 2762 kubelet.go:2306] "Pod admission denied" podUID="359ef346-d094-4fcd-8f84-766f7f3dca9a" pod="tigera-operator/tigera-operator-6f6897fdc5-pvd6x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:50.579047 kubelet[2762]: I0515 13:08:50.578997 2762 kubelet.go:2306] "Pod admission denied" podUID="af06b55b-c538-448c-b248-cfba83eccc91" pod="tigera-operator/tigera-operator-6f6897fdc5-sp4p4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.682759 kubelet[2762]: I0515 13:08:50.682711 2762 kubelet.go:2306] "Pod admission denied" podUID="926e6c8c-4f3c-4cf3-a499-29cadfe1f981" pod="tigera-operator/tigera-operator-6f6897fdc5-7hsrs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.815137 kubelet[2762]: I0515 13:08:50.814210 2762 kubelet.go:2306] "Pod admission denied" podUID="1491fde7-b99e-43db-9914-d346f1064a7a" pod="tigera-operator/tigera-operator-6f6897fdc5-bkb84" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.893054 kubelet[2762]: I0515 13:08:50.893010 2762 kubelet.go:2306] "Pod admission denied" podUID="cfefed6d-110e-49be-914f-0b89ad03b69e" pod="tigera-operator/tigera-operator-6f6897fdc5-xvtpc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:50.974120 kubelet[2762]: I0515 13:08:50.974072 2762 kubelet.go:2306] "Pod admission denied" podUID="51b28633-4928-47f9-92d8-f3a63763ac84" pod="tigera-operator/tigera-operator-6f6897fdc5-5q828" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.089871 kubelet[2762]: I0515 13:08:51.088982 2762 kubelet.go:2306] "Pod admission denied" podUID="8dc4c114-109d-4313-9462-86565e9a0108" pod="tigera-operator/tigera-operator-6f6897fdc5-rqjqq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.177789 kubelet[2762]: I0515 13:08:51.177704 2762 kubelet.go:2306] "Pod admission denied" podUID="b3e8c531-ac51-440e-97cc-ef05acc7926b" pod="tigera-operator/tigera-operator-6f6897fdc5-fprlt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:51.281292 kubelet[2762]: I0515 13:08:51.281171 2762 kubelet.go:2306] "Pod admission denied" podUID="e9dc7a39-f943-408d-9092-c5e7210e6086" pod="tigera-operator/tigera-operator-6f6897fdc5-c4gb7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.384588 kubelet[2762]: I0515 13:08:51.383504 2762 kubelet.go:2306] "Pod admission denied" podUID="f746e235-e86f-4714-958a-99cb0fe5d001" pod="tigera-operator/tigera-operator-6f6897fdc5-8x4f8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.477939 kubelet[2762]: I0515 13:08:51.474413 2762 kubelet.go:2306] "Pod admission denied" podUID="936acf5f-5cd0-4a09-8452-8a4685a41eba" pod="tigera-operator/tigera-operator-6f6897fdc5-zwccq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.577036 kubelet[2762]: I0515 13:08:51.576987 2762 kubelet.go:2306] "Pod admission denied" podUID="ebd6c0f2-7051-4482-963e-a7d9b12fb970" pod="tigera-operator/tigera-operator-6f6897fdc5-2kh9z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:51.674129 kubelet[2762]: I0515 13:08:51.674012 2762 kubelet.go:2306] "Pod admission denied" podUID="ff1cb2ad-19f1-4f5d-bc2f-23000e17e5e8" pod="tigera-operator/tigera-operator-6f6897fdc5-c7lt2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:51.846720 containerd[1543]: time="2025-05-15T13:08:51.846542523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2079196234: write /var/lib/containerd/tmpmounts/containerd-mount2079196234/usr/lib/calico/bpf/from_nat_info.o: no space left on device" May 15 13:08:51.848260 containerd[1543]: time="2025-05-15T13:08:51.846629773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 13:08:51.848578 kubelet[2762]: E0515 13:08:51.847067 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2079196234: write /var/lib/containerd/tmpmounts/containerd-mount2079196234/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 13:08:51.848578 kubelet[2762]: E0515 13:08:51.847130 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2079196234: write /var/lib/containerd/tmpmounts/containerd-mount2079196234/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 13:08:51.848750 kubelet[2762]: E0515 13:08:51.847914 2762 kuberuntime_manager.go:1272] "Unhandled Error" 
err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},R
esources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pg5bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-h5k9z_calico-system(1a8a24dd-708e-4ec3-b972-4df98026b344): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2079196234: write /var/lib/containerd/tmpmounts/containerd-mount2079196234/usr/lib/calico/bpf/from_nat_info.o: no space left on device" logger="UnhandledError" May 15 13:08:51.849127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079196234.mount: Deactivated successfully. 
May 15 13:08:51.850338 kubelet[2762]: E0515 13:08:51.849671 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2079196234: write /var/lib/containerd/tmpmounts/containerd-mount2079196234/usr/lib/calico/bpf/from_nat_info.o: no space left on device\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344" May 15 13:08:51.912619 kubelet[2762]: I0515 13:08:51.911122 2762 kubelet.go:2306] "Pod admission denied" podUID="6dc19806-0dea-4cfe-a2e7-90c298fa750b" pod="tigera-operator/tigera-operator-6f6897fdc5-7phjd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.075675 kubelet[2762]: I0515 13:08:52.073975 2762 kubelet.go:2306] "Pod admission denied" podUID="680bab54-ab82-44a5-bffd-7f851154f07e" pod="tigera-operator/tigera-operator-6f6897fdc5-bwt4f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.122854 kubelet[2762]: I0515 13:08:52.122790 2762 kubelet.go:2306] "Pod admission denied" podUID="afe24110-6c2f-4c45-bbdd-09d81ec4d8c7" pod="tigera-operator/tigera-operator-6f6897fdc5-cc4m8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.232583 kubelet[2762]: I0515 13:08:52.231998 2762 kubelet.go:2306] "Pod admission denied" podUID="cef3a52c-9b4c-4400-8789-360f68b9a5d9" pod="tigera-operator/tigera-operator-6f6897fdc5-lfch4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:52.321323 kubelet[2762]: I0515 13:08:52.321227 2762 kubelet.go:2306] "Pod admission denied" podUID="9bc7c282-09c7-4ff5-b1dd-7db9b0730c51" pod="tigera-operator/tigera-operator-6f6897fdc5-rh5hs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.422941 kubelet[2762]: I0515 13:08:52.422879 2762 kubelet.go:2306] "Pod admission denied" podUID="0aea555d-46eb-40ab-b9d1-184a1ce02db7" pod="tigera-operator/tigera-operator-6f6897fdc5-w58qj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.525290 kubelet[2762]: I0515 13:08:52.524686 2762 kubelet.go:2306] "Pod admission denied" podUID="e0dcbb95-245d-4e53-bc86-24f428b633d2" pod="tigera-operator/tigera-operator-6f6897fdc5-wmt4s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.622256 kubelet[2762]: I0515 13:08:52.622174 2762 kubelet.go:2306] "Pod admission denied" podUID="a4c37297-762a-479b-84fa-0ea97455c926" pod="tigera-operator/tigera-operator-6f6897fdc5-dcktg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.720033 kubelet[2762]: I0515 13:08:52.719891 2762 kubelet.go:2306] "Pod admission denied" podUID="b6f35ebd-25c4-4f90-b90b-830833da12e8" pod="tigera-operator/tigera-operator-6f6897fdc5-h9qvj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.784665 kubelet[2762]: I0515 13:08:52.783491 2762 kubelet.go:2306] "Pod admission denied" podUID="e753b677-2c51-43fd-9578-f462926ef6f7" pod="tigera-operator/tigera-operator-6f6897fdc5-gb5pv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:52.871981 kubelet[2762]: I0515 13:08:52.871924 2762 kubelet.go:2306] "Pod admission denied" podUID="1a92ab04-2ccc-4f84-9cac-bb59955b5b26" pod="tigera-operator/tigera-operator-6f6897fdc5-8fk7j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:52.957108 kubelet[2762]: E0515 13:08:52.957071 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:53.073645 kubelet[2762]: I0515 13:08:53.072450 2762 kubelet.go:2306] "Pod admission denied" podUID="6d936a84-1da7-4ef0-b3fe-62c9ac0576f5" pod="tigera-operator/tigera-operator-6f6897fdc5-tqg2s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.175870 kubelet[2762]: I0515 13:08:53.174944 2762 kubelet.go:2306] "Pod admission denied" podUID="c83fe447-6036-4eec-b7c7-47c61525f081" pod="tigera-operator/tigera-operator-6f6897fdc5-8pf85" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.269915 kubelet[2762]: I0515 13:08:53.269866 2762 kubelet.go:2306] "Pod admission denied" podUID="7cbffc52-51b8-4171-995d-27387ae9cbdd" pod="tigera-operator/tigera-operator-6f6897fdc5-jvbmb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.371590 kubelet[2762]: I0515 13:08:53.371525 2762 kubelet.go:2306] "Pod admission denied" podUID="e15d8622-aae4-4dcd-a997-96133d272e82" pod="tigera-operator/tigera-operator-6f6897fdc5-jz4vs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.506924 kubelet[2762]: I0515 13:08:53.506862 2762 kubelet.go:2306] "Pod admission denied" podUID="7810d0c2-bb8c-4905-9b38-7592f8a0b0b7" pod="tigera-operator/tigera-operator-6f6897fdc5-8v9r4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.573899 kubelet[2762]: I0515 13:08:53.573826 2762 kubelet.go:2306] "Pod admission denied" podUID="329a3cf8-2170-436a-8477-ab9e8d13bfdf" pod="tigera-operator/tigera-operator-6f6897fdc5-v85n2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:53.668255 kubelet[2762]: I0515 13:08:53.667970 2762 kubelet.go:2306] "Pod admission denied" podUID="8b744ad4-d546-43e6-9722-a7a42e63aec3" pod="tigera-operator/tigera-operator-6f6897fdc5-hbjc8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.876298 kubelet[2762]: I0515 13:08:53.876237 2762 kubelet.go:2306] "Pod admission denied" podUID="5c9af231-4ce9-4f2b-ab4a-e6149f1e0764" pod="tigera-operator/tigera-operator-6f6897fdc5-hdkln" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:53.959223 kubelet[2762]: E0515 13:08:53.959083 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:53.960674 containerd[1543]: time="2025-05-15T13:08:53.959944428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:08:53.976307 kubelet[2762]: I0515 13:08:53.976268 2762 kubelet.go:2306] "Pod admission denied" podUID="51e97c77-c9c7-45e8-b994-8090c831a715" pod="tigera-operator/tigera-operator-6f6897fdc5-dqh2b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:54.037856 containerd[1543]: time="2025-05-15T13:08:54.037786098Z" level=error msg="Failed to destroy network for sandbox \"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:54.042709 containerd[1543]: time="2025-05-15T13:08:54.038920350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:54.042798 kubelet[2762]: E0515 13:08:54.039211 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:54.042798 kubelet[2762]: E0515 13:08:54.039363 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:54.042798 kubelet[2762]: E0515 13:08:54.039409 2762 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:54.042798 kubelet[2762]: E0515 13:08:54.039497 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"225d95fa3e7221746147f0c5e59301add7fdeba2f301484e1cbba59a4a6289db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:08:54.043147 systemd[1]: run-netns-cni\x2d5051c5ec\x2dc3e6\x2d2fcc\x2d8ef6\x2d86aafc687dbc.mount: Deactivated successfully. May 15 13:08:54.071532 kubelet[2762]: I0515 13:08:54.071486 2762 kubelet.go:2306] "Pod admission denied" podUID="8c512658-2e6a-4a71-8632-d169a54c9108" pod="tigera-operator/tigera-operator-6f6897fdc5-pxqq2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:54.304017 kubelet[2762]: I0515 13:08:54.303362 2762 kubelet.go:2306] "Pod admission denied" podUID="4943173e-502f-4a60-be33-fd1b86f2fdf7" pod="tigera-operator/tigera-operator-6f6897fdc5-8rg9r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:54.370154 kubelet[2762]: I0515 13:08:54.370074 2762 kubelet.go:2306] "Pod admission denied" podUID="0e4de17f-f2b0-484b-a3b0-646c645f8ef2" pod="tigera-operator/tigera-operator-6f6897fdc5-89m5h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:54.426975 kubelet[2762]: I0515 13:08:54.426932 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:08:54.426975 kubelet[2762]: I0515 13:08:54.426972 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:08:54.429968 kubelet[2762]: I0515 13:08:54.429945 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:08:54.443895 kubelet[2762]: I0515 13:08:54.443871 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:08:54.443972 kubelet[2762]: I0515 13:08:54.443930 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/csi-node-driver-fxxht","calico-system/calico-node-h5k9z","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:08:54.443972 kubelet[2762]: E0515 13:08:54.443963 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:08:54.443972 kubelet[2762]: E0515 13:08:54.443974 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.443981 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 
15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.443988 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.443996 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.444007 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.444017 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.444026 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.444034 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:08:54.444133 kubelet[2762]: E0515 13:08:54.444043 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:08:54.444133 kubelet[2762]: I0515 13:08:54.444052 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:08:54.473975 kubelet[2762]: I0515 13:08:54.473927 2762 kubelet.go:2306] "Pod admission denied" podUID="debfcb67-d783-4aee-ac03-9c0da6bbdadd" pod="tigera-operator/tigera-operator-6f6897fdc5-47cw8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:54.586664 kubelet[2762]: I0515 13:08:54.585340 2762 kubelet.go:2306] "Pod admission denied" podUID="c4dcc512-1ac5-4853-a292-917cadb829fe" pod="tigera-operator/tigera-operator-6f6897fdc5-f449v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:54.620092 kubelet[2762]: I0515 13:08:54.620052 2762 kubelet.go:2306] "Pod admission denied" podUID="ec0b609d-d3c5-4a47-b287-2da01bded408" pod="tigera-operator/tigera-operator-6f6897fdc5-gd7wb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:54.948019 kubelet[2762]: I0515 13:08:54.947950 2762 kubelet.go:2306] "Pod admission denied" podUID="a487f185-6f19-464f-8fc3-a0c7e9c3c9ac" pod="tigera-operator/tigera-operator-6f6897fdc5-zn2dp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:54.958977 kubelet[2762]: E0515 13:08:54.958933 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:54.960065 containerd[1543]: time="2025-05-15T13:08:54.959991159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:08:55.030301 kubelet[2762]: I0515 13:08:55.030158 2762 kubelet.go:2306] "Pod admission denied" podUID="00a10698-16f7-4a53-8833-548f1ad10cf9" pod="tigera-operator/tigera-operator-6f6897fdc5-vh826" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:55.070834 containerd[1543]: time="2025-05-15T13:08:55.070733008Z" level=error msg="Failed to destroy network for sandbox \"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:55.079175 containerd[1543]: time="2025-05-15T13:08:55.072105331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:55.076848 systemd[1]: run-netns-cni\x2d769dcafe\x2dff99\x2df1a7\x2dc9f6\x2dd12857d95a88.mount: Deactivated successfully. 
May 15 13:08:55.079530 kubelet[2762]: E0515 13:08:55.073730 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:08:55.079530 kubelet[2762]: E0515 13:08:55.073780 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:55.079530 kubelet[2762]: E0515 13:08:55.073800 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:08:55.079530 kubelet[2762]: E0515 13:08:55.073834 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe70d7351727c2a19410d896b87b2d0dcc2e67a7506b48c9a297b6273fce4126\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:08:55.090296 kubelet[2762]: I0515 13:08:55.090240 2762 kubelet.go:2306] "Pod admission denied" podUID="da054a39-6fa5-4bd5-9e14-694f43398092" pod="tigera-operator/tigera-operator-6f6897fdc5-brw86" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.119214 kubelet[2762]: I0515 13:08:55.119154 2762 kubelet.go:2306] "Pod admission denied" podUID="90171c32-bd27-4d2a-802e-6e1d2ee94aa2" pod="tigera-operator/tigera-operator-6f6897fdc5-n5fvk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.237352 kubelet[2762]: I0515 13:08:55.237203 2762 kubelet.go:2306] "Pod admission denied" podUID="2f01729e-7674-48fa-9691-3dcf0b84772e" pod="tigera-operator/tigera-operator-6f6897fdc5-jts2m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.323154 kubelet[2762]: I0515 13:08:55.323099 2762 kubelet.go:2306] "Pod admission denied" podUID="59bce920-4b01-417d-ac27-acbaba5c59d5" pod="tigera-operator/tigera-operator-6f6897fdc5-lr2v4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.424654 kubelet[2762]: I0515 13:08:55.424590 2762 kubelet.go:2306] "Pod admission denied" podUID="c5cec5b6-841b-41df-9f63-390a0ccde064" pod="tigera-operator/tigera-operator-6f6897fdc5-hlwqv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.533948 kubelet[2762]: I0515 13:08:55.533513 2762 kubelet.go:2306] "Pod admission denied" podUID="53552753-d71a-4f5a-ad3e-251d2a35d7bd" pod="tigera-operator/tigera-operator-6f6897fdc5-nsrfc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:55.624993 kubelet[2762]: I0515 13:08:55.624907 2762 kubelet.go:2306] "Pod admission denied" podUID="a06e3cea-2359-4dd6-9111-d3687a20c586" pod="tigera-operator/tigera-operator-6f6897fdc5-v6bkv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.821981 kubelet[2762]: I0515 13:08:55.821821 2762 kubelet.go:2306] "Pod admission denied" podUID="0e257ca7-6bf5-425d-bd82-91063d38dd1f" pod="tigera-operator/tigera-operator-6f6897fdc5-rldld" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:55.931854 kubelet[2762]: I0515 13:08:55.931076 2762 kubelet.go:2306] "Pod admission denied" podUID="a7836de9-63f5-44b0-a417-8950d1ac4a53" pod="tigera-operator/tigera-operator-6f6897fdc5-kdknt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.023806 kubelet[2762]: I0515 13:08:56.023743 2762 kubelet.go:2306] "Pod admission denied" podUID="e562ca68-d1dc-47b1-816a-c2ed288f448c" pod="tigera-operator/tigera-operator-6f6897fdc5-zvjx7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.124433 kubelet[2762]: I0515 13:08:56.124382 2762 kubelet.go:2306] "Pod admission denied" podUID="825064b0-2cc2-4681-813d-bc35ad691b5b" pod="tigera-operator/tigera-operator-6f6897fdc5-dmjm2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.184652 kubelet[2762]: I0515 13:08:56.184606 2762 kubelet.go:2306] "Pod admission denied" podUID="b658a8d8-8f88-4c6a-9cca-f4085d72a76d" pod="tigera-operator/tigera-operator-6f6897fdc5-lh5mr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.271422 kubelet[2762]: I0515 13:08:56.271362 2762 kubelet.go:2306] "Pod admission denied" podUID="51d32423-9830-4206-9c48-660124c1f7aa" pod="tigera-operator/tigera-operator-6f6897fdc5-msmmc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:56.470373 kubelet[2762]: I0515 13:08:56.470248 2762 kubelet.go:2306] "Pod admission denied" podUID="52b656aa-3f99-4d4a-b0d9-366eab599109" pod="tigera-operator/tigera-operator-6f6897fdc5-bh27t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.581547 kubelet[2762]: I0515 13:08:56.581501 2762 kubelet.go:2306] "Pod admission denied" podUID="223f73bc-e153-4a3a-9235-02b0bbaf580c" pod="tigera-operator/tigera-operator-6f6897fdc5-829fv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.672812 kubelet[2762]: I0515 13:08:56.672756 2762 kubelet.go:2306] "Pod admission denied" podUID="8a6bacf3-60c6-4db1-a5bc-1f278d90b211" pod="tigera-operator/tigera-operator-6f6897fdc5-gptrp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.772871 kubelet[2762]: I0515 13:08:56.771258 2762 kubelet.go:2306] "Pod admission denied" podUID="0c0def75-ccb5-4bb1-b6d9-f5165eb7fc28" pod="tigera-operator/tigera-operator-6f6897fdc5-j5frx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.884861 kubelet[2762]: I0515 13:08:56.884077 2762 kubelet.go:2306] "Pod admission denied" podUID="a71f4152-44ca-44d1-89cd-b579a6d2d4f9" pod="tigera-operator/tigera-operator-6f6897fdc5-ps26x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:56.971852 kubelet[2762]: I0515 13:08:56.971775 2762 kubelet.go:2306] "Pod admission denied" podUID="73eed74e-beb0-4530-97d9-fe5b6428e55a" pod="tigera-operator/tigera-operator-6f6897fdc5-5lx4c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.019339 kubelet[2762]: I0515 13:08:57.018849 2762 kubelet.go:2306] "Pod admission denied" podUID="25e1221c-d8d6-453e-a1e7-5a969123dbe5" pod="tigera-operator/tigera-operator-6f6897fdc5-fsht8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:57.129572 kubelet[2762]: I0515 13:08:57.129317 2762 kubelet.go:2306] "Pod admission denied" podUID="73b3669e-dca2-4f4a-afd7-4b4a697b0b4a" pod="tigera-operator/tigera-operator-6f6897fdc5-5lw2k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.223656 kubelet[2762]: I0515 13:08:57.223597 2762 kubelet.go:2306] "Pod admission denied" podUID="2cc7d9e5-a2a5-47aa-92e8-d6ace4bfd8f1" pod="tigera-operator/tigera-operator-6f6897fdc5-zfb5k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.321686 kubelet[2762]: I0515 13:08:57.321620 2762 kubelet.go:2306] "Pod admission denied" podUID="8b926fa8-792d-45ae-8623-ee809444a35d" pod="tigera-operator/tigera-operator-6f6897fdc5-nt2td" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.434675 kubelet[2762]: I0515 13:08:57.432837 2762 kubelet.go:2306] "Pod admission denied" podUID="4060a52f-71b1-415d-834c-9ee82ac67a14" pod="tigera-operator/tigera-operator-6f6897fdc5-wtpft" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.528839 kubelet[2762]: I0515 13:08:57.528765 2762 kubelet.go:2306] "Pod admission denied" podUID="bcf09138-830e-470a-891c-2d09f3d048de" pod="tigera-operator/tigera-operator-6f6897fdc5-wwmjc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.722603 kubelet[2762]: I0515 13:08:57.721649 2762 kubelet.go:2306] "Pod admission denied" podUID="3407ba16-d2e9-4494-9c5a-78e49707abd3" pod="tigera-operator/tigera-operator-6f6897fdc5-nqj7k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.829508 kubelet[2762]: I0515 13:08:57.829412 2762 kubelet.go:2306] "Pod admission denied" podUID="909fa551-23be-4ad3-97c1-d1d7b9f0e5ec" pod="tigera-operator/tigera-operator-6f6897fdc5-7jtpj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:57.870703 kubelet[2762]: I0515 13:08:57.870643 2762 kubelet.go:2306] "Pod admission denied" podUID="3e3c8f23-ea20-497d-8ad8-57b123ccdfe0" pod="tigera-operator/tigera-operator-6f6897fdc5-nr6dj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:57.973945 kubelet[2762]: I0515 13:08:57.973804 2762 kubelet.go:2306] "Pod admission denied" podUID="4d952990-26ba-4559-8526-8fdcb0077f3e" pod="tigera-operator/tigera-operator-6f6897fdc5-8rrb7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.183643 kubelet[2762]: I0515 13:08:58.182475 2762 kubelet.go:2306] "Pod admission denied" podUID="6632acf6-357b-46ec-93ad-60bdb7345a08" pod="tigera-operator/tigera-operator-6f6897fdc5-5qpcv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.273595 kubelet[2762]: I0515 13:08:58.273028 2762 kubelet.go:2306] "Pod admission denied" podUID="57b97497-ed0a-46af-806a-0258504f9af7" pod="tigera-operator/tigera-operator-6f6897fdc5-zrr2q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.374421 kubelet[2762]: I0515 13:08:58.374168 2762 kubelet.go:2306] "Pod admission denied" podUID="47aae5b4-c49b-4bc9-9056-6a8656be39ad" pod="tigera-operator/tigera-operator-6f6897fdc5-rg6rq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.485502 kubelet[2762]: I0515 13:08:58.485008 2762 kubelet.go:2306] "Pod admission denied" podUID="5a2fdc12-8bde-4189-b8a8-480af797dab3" pod="tigera-operator/tigera-operator-6f6897fdc5-c8j6r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.572357 kubelet[2762]: I0515 13:08:58.572217 2762 kubelet.go:2306] "Pod admission denied" podUID="2edc33a2-d0cd-4e39-9d13-c9e728a26721" pod="tigera-operator/tigera-operator-6f6897fdc5-q5t5c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:58.772884 kubelet[2762]: I0515 13:08:58.772787 2762 kubelet.go:2306] "Pod admission denied" podUID="b0f96cad-a344-427f-906e-a1c05bab62b4" pod="tigera-operator/tigera-operator-6f6897fdc5-92tvp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.887446 kubelet[2762]: I0515 13:08:58.887179 2762 kubelet.go:2306] "Pod admission denied" podUID="e4e019d4-32a8-411d-a210-2cf47d95a93d" pod="tigera-operator/tigera-operator-6f6897fdc5-kgd7s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:58.927265 kubelet[2762]: I0515 13:08:58.926768 2762 kubelet.go:2306] "Pod admission denied" podUID="db89d5d8-cadd-4dba-91ec-5050ffb526c4" pod="tigera-operator/tigera-operator-6f6897fdc5-qk8ts" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.023421 kubelet[2762]: I0515 13:08:59.023343 2762 kubelet.go:2306] "Pod admission denied" podUID="ea103d91-f677-41fb-878a-556063986d11" pod="tigera-operator/tigera-operator-6f6897fdc5-5mfmj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.223879 kubelet[2762]: I0515 13:08:59.222690 2762 kubelet.go:2306] "Pod admission denied" podUID="8d3410d4-da8e-4114-9a0c-aceee3284677" pod="tigera-operator/tigera-operator-6f6897fdc5-75jrc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.324380 kubelet[2762]: I0515 13:08:59.324241 2762 kubelet.go:2306] "Pod admission denied" podUID="b56137cd-3170-4b29-97c5-83fa0375f51a" pod="tigera-operator/tigera-operator-6f6897fdc5-htzkp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.430622 kubelet[2762]: I0515 13:08:59.430237 2762 kubelet.go:2306] "Pod admission denied" podUID="8a3417ca-2dc8-4385-b365-0c0fb9449832" pod="tigera-operator/tigera-operator-6f6897fdc5-jhtvp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:59.625607 kubelet[2762]: I0515 13:08:59.625523 2762 kubelet.go:2306] "Pod admission denied" podUID="20478aee-87f8-4c1d-9b6d-4c61ccbd7b16" pod="tigera-operator/tigera-operator-6f6897fdc5-8ptfg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.723198 kubelet[2762]: I0515 13:08:59.723129 2762 kubelet.go:2306] "Pod admission denied" podUID="3e88b2c5-65c2-4426-97d4-a0355fd4b34f" pod="tigera-operator/tigera-operator-6f6897fdc5-6wx6d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.822189 kubelet[2762]: I0515 13:08:59.822109 2762 kubelet.go:2306] "Pod admission denied" podUID="0f9d38bf-b326-4303-8064-f0dd62bfb803" pod="tigera-operator/tigera-operator-6f6897fdc5-wskbx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:08:59.925621 kubelet[2762]: I0515 13:08:59.925453 2762 kubelet.go:2306] "Pod admission denied" podUID="25f495e6-d0ed-4de9-85b4-14d550d57e48" pod="tigera-operator/tigera-operator-6f6897fdc5-nr4vr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:08:59.966127 kubelet[2762]: E0515 13:08:59.966069 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:08:59.968673 containerd[1543]: time="2025-05-15T13:08:59.968281857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:08:59.970283 containerd[1543]: time="2025-05-15T13:08:59.968283947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:09:00.043606 containerd[1543]: time="2025-05-15T13:09:00.042412522Z" level=error msg="Failed to destroy network for sandbox \"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.045860 containerd[1543]: time="2025-05-15T13:09:00.045816898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.047547 systemd[1]: run-netns-cni\x2df621f7fe\x2d768d\x2db61b\x2df42b\x2d91812486d111.mount: Deactivated successfully. 
May 15 13:09:00.051592 kubelet[2762]: E0515 13:09:00.051207 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.051592 kubelet[2762]: E0515 13:09:00.051350 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:00.051592 kubelet[2762]: E0515 13:09:00.051390 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:00.051592 kubelet[2762]: E0515 13:09:00.051464 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"80eb2f0cd7de2d28c2f29c618ef8d5d9341873cd6b7ff5383e241475ad30a6de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:09:00.081028 kubelet[2762]: I0515 13:09:00.080971 2762 kubelet.go:2306] "Pod admission denied" podUID="1211da14-f10c-40f7-9110-50d57a10eca4" pod="tigera-operator/tigera-operator-6f6897fdc5-2sdrf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.134182 kubelet[2762]: I0515 13:09:00.134117 2762 kubelet.go:2306] "Pod admission denied" podUID="59f2b1ac-768f-4223-b904-39c6f0b6e9f0" pod="tigera-operator/tigera-operator-6f6897fdc5-2fvgf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.139200 containerd[1543]: time="2025-05-15T13:09:00.139071008Z" level=error msg="Failed to destroy network for sandbox \"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.145073 containerd[1543]: time="2025-05-15T13:09:00.145035829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.145951 systemd[1]: run-netns-cni\x2dc540fc44\x2d8776\x2d01ff\x2d094b\x2dea5d2bad4382.mount: Deactivated 
successfully. May 15 13:09:00.147191 kubelet[2762]: E0515 13:09:00.147157 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:00.148761 kubelet[2762]: E0515 13:09:00.147732 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:00.148761 kubelet[2762]: E0515 13:09:00.147759 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:00.148761 kubelet[2762]: E0515 13:09:00.147850 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40f23cb803203aefd6fab77197a2506f62d587bf7d11a330e81f3565f38fb9ec\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:09:00.223673 kubelet[2762]: I0515 13:09:00.223515 2762 kubelet.go:2306] "Pod admission denied" podUID="07b48d33-2221-4012-af6d-8aa5365120b8" pod="tigera-operator/tigera-operator-6f6897fdc5-926rr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.339404 kubelet[2762]: I0515 13:09:00.339353 2762 kubelet.go:2306] "Pod admission denied" podUID="350548c0-bbca-4d2e-9a4b-80f84e2924c4" pod="tigera-operator/tigera-operator-6f6897fdc5-z2ngv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.423542 kubelet[2762]: I0515 13:09:00.423479 2762 kubelet.go:2306] "Pod admission denied" podUID="5ee32336-18a6-460e-9e63-2cc25ffd0f0b" pod="tigera-operator/tigera-operator-6f6897fdc5-df8x5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.624232 kubelet[2762]: I0515 13:09:00.624177 2762 kubelet.go:2306] "Pod admission denied" podUID="285fe159-b585-453b-b82e-ca81f9fa3248" pod="tigera-operator/tigera-operator-6f6897fdc5-rrsvt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.725700 kubelet[2762]: I0515 13:09:00.725619 2762 kubelet.go:2306] "Pod admission denied" podUID="a4a861c7-df61-4a7e-a883-2577695a09dd" pod="tigera-operator/tigera-operator-6f6897fdc5-7phzr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:00.793793 kubelet[2762]: I0515 13:09:00.793723 2762 kubelet.go:2306] "Pod admission denied" podUID="24b722b9-aebb-4dcc-aeac-4f9843f236c6" pod="tigera-operator/tigera-operator-6f6897fdc5-nxjh2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:00.890688 kubelet[2762]: I0515 13:09:00.888139 2762 kubelet.go:2306] "Pod admission denied" podUID="15cdfd17-6d69-4347-b65f-cf5289800be5" pod="tigera-operator/tigera-operator-6f6897fdc5-nbfrx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.126908 kubelet[2762]: I0515 13:09:01.126849 2762 kubelet.go:2306] "Pod admission denied" podUID="a80431ed-0a47-4202-b95c-f4bb8566fa98" pod="tigera-operator/tigera-operator-6f6897fdc5-6v8mk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.224435 kubelet[2762]: I0515 13:09:01.224279 2762 kubelet.go:2306] "Pod admission denied" podUID="80b71412-33f6-492c-9885-35ebf697e507" pod="tigera-operator/tigera-operator-6f6897fdc5-pp546" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.328188 kubelet[2762]: I0515 13:09:01.328124 2762 kubelet.go:2306] "Pod admission denied" podUID="9e7ad884-daed-4179-85d9-d70effaabc2c" pod="tigera-operator/tigera-operator-6f6897fdc5-b7thd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.372666 kubelet[2762]: I0515 13:09:01.372604 2762 kubelet.go:2306] "Pod admission denied" podUID="fffea893-9843-4bc2-a5de-4d5ae6199c37" pod="tigera-operator/tigera-operator-6f6897fdc5-dmzr2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.499816 kubelet[2762]: I0515 13:09:01.498846 2762 kubelet.go:2306] "Pod admission denied" podUID="5d944d52-23d2-4530-a1d8-0cb53b585775" pod="tigera-operator/tigera-operator-6f6897fdc5-6t7lg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.571859 kubelet[2762]: I0515 13:09:01.571798 2762 kubelet.go:2306] "Pod admission denied" podUID="efd75795-eb47-42f8-a9ec-d85a3fcba6dd" pod="tigera-operator/tigera-operator-6f6897fdc5-qwrb7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:01.676718 kubelet[2762]: I0515 13:09:01.676651 2762 kubelet.go:2306] "Pod admission denied" podUID="ad4c7a0c-dc79-45c8-bd88-979b1dbcdab9" pod="tigera-operator/tigera-operator-6f6897fdc5-vbsxt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.776954 kubelet[2762]: I0515 13:09:01.776057 2762 kubelet.go:2306] "Pod admission denied" podUID="1206d4c6-3fa4-4c05-aa26-26c506844be6" pod="tigera-operator/tigera-operator-6f6897fdc5-qpqmb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.821769 kubelet[2762]: I0515 13:09:01.821686 2762 kubelet.go:2306] "Pod admission denied" podUID="8e652c2d-92d0-4760-b411-f7b8f69b54a3" pod="tigera-operator/tigera-operator-6f6897fdc5-hzfpc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:01.922485 kubelet[2762]: I0515 13:09:01.922411 2762 kubelet.go:2306] "Pod admission denied" podUID="47696d3a-374e-4b47-a1df-640bc2c87163" pod="tigera-operator/tigera-operator-6f6897fdc5-s2tz4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.032906 kubelet[2762]: I0515 13:09:02.031924 2762 kubelet.go:2306] "Pod admission denied" podUID="2928c9d6-8a7d-406e-8bc8-ab01b9c8e3bf" pod="tigera-operator/tigera-operator-6f6897fdc5-gfvnl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.125105 kubelet[2762]: I0515 13:09:02.125004 2762 kubelet.go:2306] "Pod admission denied" podUID="33c9c3ea-aea8-4bea-aa10-db30cb72a100" pod="tigera-operator/tigera-operator-6f6897fdc5-tkbrd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.221090 kubelet[2762]: I0515 13:09:02.221014 2762 kubelet.go:2306] "Pod admission denied" podUID="2fba17ad-ef5f-4937-977d-5e8747fc3537" pod="tigera-operator/tigera-operator-6f6897fdc5-kzbrz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:02.334699 kubelet[2762]: I0515 13:09:02.334008 2762 kubelet.go:2306] "Pod admission denied" podUID="c9e0ba57-2cbf-4440-aa66-1dc38ac70f87" pod="tigera-operator/tigera-operator-6f6897fdc5-rgdv5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.424871 kubelet[2762]: I0515 13:09:02.424810 2762 kubelet.go:2306] "Pod admission denied" podUID="31c0fe9a-a60d-4291-a730-9c2e419128f6" pod="tigera-operator/tigera-operator-6f6897fdc5-wws6t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.523264 kubelet[2762]: I0515 13:09:02.522898 2762 kubelet.go:2306] "Pod admission denied" podUID="9fa6d0fb-d1e5-42a5-be0b-c9b726e95664" pod="tigera-operator/tigera-operator-6f6897fdc5-pcdwc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.630950 kubelet[2762]: I0515 13:09:02.630912 2762 kubelet.go:2306] "Pod admission denied" podUID="c0b32ba2-7038-4441-983a-345eaf1381df" pod="tigera-operator/tigera-operator-6f6897fdc5-858sq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.724647 kubelet[2762]: I0515 13:09:02.724578 2762 kubelet.go:2306] "Pod admission denied" podUID="936e37d0-f895-4e55-8449-0930858719fb" pod="tigera-operator/tigera-operator-6f6897fdc5-m57r9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.820067 kubelet[2762]: I0515 13:09:02.820001 2762 kubelet.go:2306] "Pod admission denied" podUID="7fd5e34d-598f-49d0-96d2-19d9b0ebb908" pod="tigera-operator/tigera-operator-6f6897fdc5-wqfbb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:02.933190 kubelet[2762]: I0515 13:09:02.932511 2762 kubelet.go:2306] "Pod admission denied" podUID="eee55ef0-4f16-4407-a7ba-89e83b3d0a0f" pod="tigera-operator/tigera-operator-6f6897fdc5-2x842" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:03.023908 kubelet[2762]: I0515 13:09:03.023832 2762 kubelet.go:2306] "Pod admission denied" podUID="910eb0c3-979d-4d4e-8e1b-6e87414ee4cd" pod="tigera-operator/tigera-operator-6f6897fdc5-nv8wc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.124526 kubelet[2762]: I0515 13:09:03.124456 2762 kubelet.go:2306] "Pod admission denied" podUID="eff2bd39-0455-4ffc-bf65-fa1aca48c946" pod="tigera-operator/tigera-operator-6f6897fdc5-dscwv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.237007 kubelet[2762]: I0515 13:09:03.235168 2762 kubelet.go:2306] "Pod admission denied" podUID="31687080-e04a-4270-9ce6-ea98327edeba" pod="tigera-operator/tigera-operator-6f6897fdc5-7qm9d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.321116 kubelet[2762]: I0515 13:09:03.321053 2762 kubelet.go:2306] "Pod admission denied" podUID="04dab36b-8fe9-4051-bde8-e2a84a25ba55" pod="tigera-operator/tigera-operator-6f6897fdc5-dzpls" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.433641 kubelet[2762]: I0515 13:09:03.433580 2762 kubelet.go:2306] "Pod admission denied" podUID="e39e11f4-74fc-4da9-b6d3-471708536353" pod="tigera-operator/tigera-operator-6f6897fdc5-f29zv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.487996 kubelet[2762]: I0515 13:09:03.486497 2762 kubelet.go:2306] "Pod admission denied" podUID="0a3d3721-c55a-4694-9680-e421e12f276a" pod="tigera-operator/tigera-operator-6f6897fdc5-8hjtk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.572861 kubelet[2762]: I0515 13:09:03.572803 2762 kubelet.go:2306] "Pod admission denied" podUID="32d86a7c-dad7-4b9e-b8c9-83304d9c28f4" pod="tigera-operator/tigera-operator-6f6897fdc5-26mj4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:03.774136 kubelet[2762]: I0515 13:09:03.773502 2762 kubelet.go:2306] "Pod admission denied" podUID="6d734dbb-1d0c-48c4-81d8-4a45a9c74c2f" pod="tigera-operator/tigera-operator-6f6897fdc5-fr2m7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.884414 kubelet[2762]: I0515 13:09:03.883965 2762 kubelet.go:2306] "Pod admission denied" podUID="255a247f-f734-4c68-b6b7-ae902fde686e" pod="tigera-operator/tigera-operator-6f6897fdc5-rfkzs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:03.971826 kubelet[2762]: I0515 13:09:03.971765 2762 kubelet.go:2306] "Pod admission denied" podUID="01b3a199-5519-415d-b314-5d78cc4c958b" pod="tigera-operator/tigera-operator-6f6897fdc5-8bcqv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:04.073620 kubelet[2762]: I0515 13:09:04.072806 2762 kubelet.go:2306] "Pod admission denied" podUID="97518892-de5a-4725-aa03-f9f2e54d8dfe" pod="tigera-operator/tigera-operator-6f6897fdc5-wrc2k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:04.175966 kubelet[2762]: I0515 13:09:04.175917 2762 kubelet.go:2306] "Pod admission denied" podUID="c5a704d3-6b7a-4a01-8823-9a3b0bcececc" pod="tigera-operator/tigera-operator-6f6897fdc5-nhb8s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:04.272590 kubelet[2762]: I0515 13:09:04.272526 2762 kubelet.go:2306] "Pod admission denied" podUID="62e9ac92-0ade-4e32-ae91-8f0a4162487e" pod="tigera-operator/tigera-operator-6f6897fdc5-6sq4d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:04.385577 kubelet[2762]: I0515 13:09:04.385304 2762 kubelet.go:2306] "Pod admission denied" podUID="ea7db70b-c2ee-488e-b870-394a841e81d4" pod="tigera-operator/tigera-operator-6f6897fdc5-dbdvx" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
May 15 13:09:04.474668 kubelet[2762]: I0515 13:09:04.474428 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:09:04.474924 kubelet[2762]: I0515 13:09:04.474886 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:09:04.477571 kubelet[2762]: I0515 13:09:04.477152 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:09:04.484362 kubelet[2762]: I0515 13:09:04.484319 2762 kubelet.go:2306] "Pod admission denied" podUID="57b96e76-8ac4-41d8-8397-40b162d97bdc" pod="tigera-operator/tigera-operator-6f6897fdc5-qgxmt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.493292 kubelet[2762]: I0515 13:09:04.493249 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:09:04.493361 kubelet[2762]: I0515 13:09:04.493344 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/csi-node-driver-fxxht","calico-system/calico-node-h5k9z","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493390 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493399 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493406 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493412 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493419 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493437 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493649 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:09:04.493659 kubelet[2762]: E0515 13:09:04.493659 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:09:04.493861 kubelet[2762]: E0515 13:09:04.493669 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:09:04.493861 kubelet[2762]: E0515 13:09:04.493677 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:09:04.493861 kubelet[2762]: I0515 13:09:04.493691 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:09:04.574782 kubelet[2762]: I0515 13:09:04.574716 2762 kubelet.go:2306] "Pod admission denied" podUID="bd4ce70b-4d57-4317-af6a-30e47f417a25" pod="tigera-operator/tigera-operator-6f6897fdc5-ct65t" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.672533 kubelet[2762]: I0515 13:09:04.671736 2762 kubelet.go:2306] "Pod admission denied" podUID="35fe7aae-6328-4c1b-ae69-60cfff4dc443" pod="tigera-operator/tigera-operator-6f6897fdc5-szbl6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.724273 kubelet[2762]: I0515 13:09:04.724208 2762 kubelet.go:2306] "Pod admission denied" podUID="cc0b9542-a6df-443e-a532-23d729784622" pod="tigera-operator/tigera-operator-6f6897fdc5-hqs57" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.825799 kubelet[2762]: I0515 13:09:04.825735 2762 kubelet.go:2306] "Pod admission denied" podUID="7e4e56bb-ad51-441e-963d-518aeecf037e" pod="tigera-operator/tigera-operator-6f6897fdc5-hks2z" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.929163 kubelet[2762]: I0515 13:09:04.929019 2762 kubelet.go:2306] "Pod admission denied" podUID="c51cbba7-0c77-4385-a9af-a5ceb025c4ec" pod="tigera-operator/tigera-operator-6f6897fdc5-kbc5m" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:04.958250 kubelet[2762]: E0515 13:09:04.957190 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:04.959791 kubelet[2762]: E0515 13:09:04.959538 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344"
May 15 13:09:05.026105 kubelet[2762]: I0515 13:09:05.026041 2762 kubelet.go:2306] "Pod admission denied" podUID="78586670-6567-4a4c-b28e-34f2f4637fff" pod="tigera-operator/tigera-operator-6f6897fdc5-qzcfg" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.123765 kubelet[2762]: I0515 13:09:05.123701 2762 kubelet.go:2306] "Pod admission denied" podUID="7829e24b-eb70-44a1-a090-21aa0fb6d967" pod="tigera-operator/tigera-operator-6f6897fdc5-jb644" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.225903 kubelet[2762]: I0515 13:09:05.225362 2762 kubelet.go:2306] "Pod admission denied" podUID="a85547ff-60ed-4809-85cd-f7de58b20239" pod="tigera-operator/tigera-operator-6f6897fdc5-dgqms" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.326295 kubelet[2762]: I0515 13:09:05.326233 2762 kubelet.go:2306] "Pod admission denied" podUID="f1248361-de8a-4044-9be9-be00a8b9be8e" pod="tigera-operator/tigera-operator-6f6897fdc5-gn2pg" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.386202 kubelet[2762]: I0515 13:09:05.386130 2762 kubelet.go:2306] "Pod admission denied" podUID="357247e4-6c9b-4c61-853d-1008d24dd9de" pod="tigera-operator/tigera-operator-6f6897fdc5-2gwlx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.474097 kubelet[2762]: I0515 13:09:05.474036 2762 kubelet.go:2306] "Pod admission denied" podUID="ef5daa40-4c1b-4a97-abd3-152f9fedcbaf" pod="tigera-operator/tigera-operator-6f6897fdc5-ssvwr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.573308 kubelet[2762]: I0515 13:09:05.573156 2762 kubelet.go:2306] "Pod admission denied" podUID="3583b4bd-ad66-499b-a1a9-16ef5e8ac945" pod="tigera-operator/tigera-operator-6f6897fdc5-7lxlc" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.692302 kubelet[2762]: I0515 13:09:05.692229 2762 kubelet.go:2306] "Pod admission denied" podUID="3560110b-7e9d-4013-8334-d9632e90b5de" pod="tigera-operator/tigera-operator-6f6897fdc5-xqljt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.777195 kubelet[2762]: I0515 13:09:05.777120 2762 kubelet.go:2306] "Pod admission denied" podUID="8d4e86b7-60ff-4739-8a8e-8209d379ba0e" pod="tigera-operator/tigera-operator-6f6897fdc5-4tcrx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.883148 kubelet[2762]: I0515 13:09:05.882513 2762 kubelet.go:2306] "Pod admission denied" podUID="3795abc1-5ae7-45f7-a30d-abf198b09906" pod="tigera-operator/tigera-operator-6f6897fdc5-cttwk" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:05.974990 kubelet[2762]: I0515 13:09:05.974928 2762 kubelet.go:2306] "Pod admission denied" podUID="64559df4-68eb-47ab-a757-efbe7f8055a1" pod="tigera-operator/tigera-operator-6f6897fdc5-kg8zp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.073545 kubelet[2762]: I0515 13:09:06.073482 2762 kubelet.go:2306] "Pod admission denied" podUID="774afb7e-0285-446e-902f-b227f6eadffd" pod="tigera-operator/tigera-operator-6f6897fdc5-8zq9x" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.173413 kubelet[2762]: I0515 13:09:06.172477 2762 kubelet.go:2306] "Pod admission denied" podUID="2b8cd962-a88b-4041-ac96-79424265b6e5" pod="tigera-operator/tigera-operator-6f6897fdc5-dwdrz" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.280090 kubelet[2762]: I0515 13:09:06.280018 2762 kubelet.go:2306] "Pod admission denied" podUID="a230e1f6-cd31-449a-b844-73383b03c9fb" pod="tigera-operator/tigera-operator-6f6897fdc5-q9cgs" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.379583 kubelet[2762]: I0515 13:09:06.379046 2762 kubelet.go:2306] "Pod admission denied" podUID="764cbbf6-7012-4476-b538-85cde4d3d622" pod="tigera-operator/tigera-operator-6f6897fdc5-gj2r5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.435844 kubelet[2762]: I0515 13:09:06.435286 2762 kubelet.go:2306] "Pod admission denied" podUID="5e29a742-6c2d-4e94-819e-7fa501ed33f8" pod="tigera-operator/tigera-operator-6f6897fdc5-2zxzd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.524501 kubelet[2762]: I0515 13:09:06.524439 2762 kubelet.go:2306] "Pod admission denied" podUID="ea23e820-323c-4b7b-a33d-9a76fe619580" pod="tigera-operator/tigera-operator-6f6897fdc5-v47x9" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.623166 kubelet[2762]: I0515 13:09:06.623094 2762 kubelet.go:2306] "Pod admission denied" podUID="91de3f97-98c6-4397-9470-c4fa12bfa526" pod="tigera-operator/tigera-operator-6f6897fdc5-gsfn4" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.722924 kubelet[2762]: I0515 13:09:06.722162 2762 kubelet.go:2306] "Pod admission denied" podUID="f5266acc-1f5e-4383-a765-3a072249f77f" pod="tigera-operator/tigera-operator-6f6897fdc5-qq5gr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.829418 kubelet[2762]: I0515 13:09:06.828782 2762 kubelet.go:2306] "Pod admission denied" podUID="2196f68f-a2be-4d14-84d0-b8ff9b293ac0" pod="tigera-operator/tigera-operator-6f6897fdc5-mptxm" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:06.926222 kubelet[2762]: I0515 13:09:06.926151 2762 kubelet.go:2306] "Pod admission denied" podUID="65875c79-bb31-44d2-94cc-7863ad37f7dc" pod="tigera-operator/tigera-operator-6f6897fdc5-lwlws" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.023611 kubelet[2762]: I0515 13:09:07.023202 2762 kubelet.go:2306] "Pod admission denied" podUID="abf24b1d-8479-46b9-a995-dbd1c28a01f7" pod="tigera-operator/tigera-operator-6f6897fdc5-ptmq7" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.076662 kubelet[2762]: I0515 13:09:07.076610 2762 kubelet.go:2306] "Pod admission denied" podUID="8c383bdf-b3ad-400a-aee4-34ce6610879a" pod="tigera-operator/tigera-operator-6f6897fdc5-bkqz9" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.175326 kubelet[2762]: I0515 13:09:07.175269 2762 kubelet.go:2306] "Pod admission denied" podUID="6a58cfeb-b2df-4cde-a950-fa98d3426957" pod="tigera-operator/tigera-operator-6f6897fdc5-bs624" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.281962 kubelet[2762]: I0515 13:09:07.281830 2762 kubelet.go:2306] "Pod admission denied" podUID="4adfa91d-9d1f-429d-8b22-c880db1c355b" pod="tigera-operator/tigera-operator-6f6897fdc5-qj67j" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.328331 kubelet[2762]: I0515 13:09:07.328285 2762 kubelet.go:2306] "Pod admission denied" podUID="b9b69552-8902-4138-85db-0f964de5cf1a" pod="tigera-operator/tigera-operator-6f6897fdc5-hvrd9" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.429240 kubelet[2762]: I0515 13:09:07.429171 2762 kubelet.go:2306] "Pod admission denied" podUID="2834ac7d-2d90-486d-a1a7-3d6455280628" pod="tigera-operator/tigera-operator-6f6897fdc5-rl5zd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.526960 kubelet[2762]: I0515 13:09:07.526881 2762 kubelet.go:2306] "Pod admission denied" podUID="57c70754-8ea1-45d4-99a6-948b91c6c360" pod="tigera-operator/tigera-operator-6f6897fdc5-28tk2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.624269 kubelet[2762]: I0515 13:09:07.624211 2762 kubelet.go:2306] "Pod admission denied" podUID="b7e4a461-7e62-45d8-888e-26d3fcf9a729" pod="tigera-operator/tigera-operator-6f6897fdc5-zkb2p" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.727954 kubelet[2762]: I0515 13:09:07.727900 2762 kubelet.go:2306] "Pod admission denied" podUID="f677528d-a4ed-4093-bdaa-d6f779619508" pod="tigera-operator/tigera-operator-6f6897fdc5-p6j2l" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.824242 kubelet[2762]: I0515 13:09:07.824186 2762 kubelet.go:2306] "Pod admission denied" podUID="e3483165-ae28-422a-b724-a3e07d263107" pod="tigera-operator/tigera-operator-6f6897fdc5-mz9hg" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.926156 kubelet[2762]: I0515 13:09:07.926011 2762 kubelet.go:2306] "Pod admission denied" podUID="83d1df41-dd85-40a7-93a4-c4e5ebf1ec85" pod="tigera-operator/tigera-operator-6f6897fdc5-gv6ss" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:07.958672 kubelet[2762]: E0515 13:09:07.958617 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:07.962133 containerd[1543]: time="2025-05-15T13:09:07.961417300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}"
May 15 13:09:07.962856 kubelet[2762]: E0515 13:09:07.961474 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:08.075702 kubelet[2762]: I0515 13:09:08.075570 2762 kubelet.go:2306] "Pod admission denied" podUID="b4f6a526-f855-46b6-8cf7-815182ac0923" pod="tigera-operator/tigera-operator-6f6897fdc5-85jvt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.078588 containerd[1543]: time="2025-05-15T13:09:08.076317183Z" level=error msg="Failed to destroy network for sandbox \"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:08.080065 containerd[1543]: time="2025-05-15T13:09:08.080024510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:08.080705 kubelet[2762]: E0515 13:09:08.080379 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:08.080798 kubelet[2762]: E0515 13:09:08.080756 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht"
May 15 13:09:08.080898 kubelet[2762]: E0515 13:09:08.080871 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht"
May 15 13:09:08.081058 kubelet[2762]: E0515 13:09:08.081005 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5832f1f4cf771c4a52b02b70cc04f6e2a3fecec73c3e4b02d55ed0104bf6b5de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527"
May 15 13:09:08.081690 systemd[1]: run-netns-cni\x2d4687b70d\x2d5dbd\x2dd17b\x2d7454\x2d39e81fa4ef5f.mount: Deactivated successfully.
May 15 13:09:08.278771 kubelet[2762]: I0515 13:09:08.277655 2762 kubelet.go:2306] "Pod admission denied" podUID="00e1d86c-db97-42dd-b0f6-1c525d4dd6ec" pod="tigera-operator/tigera-operator-6f6897fdc5-pjsqx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.374897 kubelet[2762]: I0515 13:09:08.374831 2762 kubelet.go:2306] "Pod admission denied" podUID="ff85e31a-b48b-409f-84fb-878eddaae7ee" pod="tigera-operator/tigera-operator-6f6897fdc5-4ssgq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.473765 kubelet[2762]: I0515 13:09:08.473700 2762 kubelet.go:2306] "Pod admission denied" podUID="2a21b8d1-e9fd-4ff4-b018-c71a0e2c97a4" pod="tigera-operator/tigera-operator-6f6897fdc5-7bxrk" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.594699 kubelet[2762]: I0515 13:09:08.594335 2762 kubelet.go:2306] "Pod admission denied" podUID="dcbdfb57-5c61-4748-aa76-28a563f6bb1e" pod="tigera-operator/tigera-operator-6f6897fdc5-pqgfl" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.691582 kubelet[2762]: I0515 13:09:08.690425 2762 kubelet.go:2306] "Pod admission denied" podUID="a3b120ed-c887-4421-b4f8-2acce96d4a84" pod="tigera-operator/tigera-operator-6f6897fdc5-bnjgs" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.793579 kubelet[2762]: I0515 13:09:08.793512 2762 kubelet.go:2306] "Pod admission denied" podUID="9fc03f5b-a90c-4649-99dd-843ffe7de55c" pod="tigera-operator/tigera-operator-6f6897fdc5-vqmw8" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.874741 kubelet[2762]: I0515 13:09:08.874154 2762 kubelet.go:2306] "Pod admission denied" podUID="fb71f525-516a-4efe-82ea-df5cf62815dd" pod="tigera-operator/tigera-operator-6f6897fdc5-fpmnf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:08.977409 kubelet[2762]: I0515 13:09:08.977363 2762 kubelet.go:2306] "Pod admission denied" podUID="d0f5433b-9750-47e3-8773-eb5cf0d6d81e" pod="tigera-operator/tigera-operator-6f6897fdc5-9pjvz" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.075827 kubelet[2762]: I0515 13:09:09.075765 2762 kubelet.go:2306] "Pod admission denied" podUID="f434f81d-1fd1-492e-badd-70af45a02353" pod="tigera-operator/tigera-operator-6f6897fdc5-wbvnz" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.176446 kubelet[2762]: I0515 13:09:09.176298 2762 kubelet.go:2306] "Pod admission denied" podUID="81693ace-5368-4cd1-adff-56ad8a6097ec" pod="tigera-operator/tigera-operator-6f6897fdc5-hnkv5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.277218 kubelet[2762]: I0515 13:09:09.277145 2762 kubelet.go:2306] "Pod admission denied" podUID="0f79959e-6695-4230-9321-4ee1f86c1512" pod="tigera-operator/tigera-operator-6f6897fdc5-284b4" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.376159 kubelet[2762]: I0515 13:09:09.376089 2762 kubelet.go:2306] "Pod admission denied" podUID="2c012234-5ccb-4ea8-b07f-45f9028ab099" pod="tigera-operator/tigera-operator-6f6897fdc5-hmm95" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.579752 kubelet[2762]: I0515 13:09:09.579470 2762 kubelet.go:2306] "Pod admission denied" podUID="bd4b20ed-f6c5-4a81-807f-4d7c23fbd125" pod="tigera-operator/tigera-operator-6f6897fdc5-dpkcx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.674366 kubelet[2762]: I0515 13:09:09.674294 2762 kubelet.go:2306] "Pod admission denied" podUID="b1643f8c-2a62-4afd-a188-a0aebbf1b6b3" pod="tigera-operator/tigera-operator-6f6897fdc5-d6sz9" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.774386 kubelet[2762]: I0515 13:09:09.774322 2762 kubelet.go:2306] "Pod admission denied" podUID="b09b5ca7-8d3d-4926-8f91-23dc26258c18" pod="tigera-operator/tigera-operator-6f6897fdc5-xgwnf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.876305 kubelet[2762]: I0515 13:09:09.876243 2762 kubelet.go:2306] "Pod admission denied" podUID="bac053d8-4c5a-4498-92ec-d2470fe01396" pod="tigera-operator/tigera-operator-6f6897fdc5-75dwt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:09.964262 kubelet[2762]: E0515 13:09:09.964193 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:09.967947 containerd[1543]: time="2025-05-15T13:09:09.967915993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}"
May 15 13:09:09.992115 kubelet[2762]: I0515 13:09:09.991533 2762 kubelet.go:2306] "Pod admission denied" podUID="1440ddba-c2a6-4721-b7cc-08d1a75afd11" pod="tigera-operator/tigera-operator-6f6897fdc5-8c2l5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.101527 kubelet[2762]: I0515 13:09:10.101487 2762 kubelet.go:2306] "Pod admission denied" podUID="3ca94291-52be-47a5-896e-713e00db4875" pod="tigera-operator/tigera-operator-6f6897fdc5-mqknx" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.106978 containerd[1543]: time="2025-05-15T13:09:10.106816111Z" level=error msg="Failed to destroy network for sandbox \"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:10.109357 systemd[1]: run-netns-cni\x2df8e775be\x2d40dc\x2d484e\x2d555d\x2db17f5aed3354.mount: Deactivated successfully.
May 15 13:09:10.111485 containerd[1543]: time="2025-05-15T13:09:10.110696619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:10.115009 kubelet[2762]: E0515 13:09:10.114202 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:10.115009 kubelet[2762]: E0515 13:09:10.114260 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:09:10.115009 kubelet[2762]: E0515 13:09:10.114284 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:09:10.115009 kubelet[2762]: E0515 13:09:10.114330 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f83c00f4228dab6dddad2fdd9176cfaf84a5d9b44e8f49924b4836aad02eb891\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb"
May 15 13:09:10.240193 kubelet[2762]: I0515 13:09:10.239980 2762 kubelet.go:2306] "Pod admission denied" podUID="ab2d874e-e737-435b-9f32-8949006cc8b4" pod="tigera-operator/tigera-operator-6f6897fdc5-jkpcn" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.329459 kubelet[2762]: I0515 13:09:10.329388 2762 kubelet.go:2306] "Pod admission denied" podUID="72af069c-3a23-412a-8e27-a857ba263276" pod="tigera-operator/tigera-operator-6f6897fdc5-czmm8" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.431805 kubelet[2762]: I0515 13:09:10.431707 2762 kubelet.go:2306] "Pod admission denied" podUID="d9d566bd-cfb6-4227-9161-e8b3354e0c51" pod="tigera-operator/tigera-operator-6f6897fdc5-jqnt5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.544342 kubelet[2762]: I0515 13:09:10.544104 2762 kubelet.go:2306] "Pod admission denied" podUID="4516663b-2988-400d-984c-332d27ca5f1a" pod="tigera-operator/tigera-operator-6f6897fdc5-5c9nt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.668434 kubelet[2762]: I0515 13:09:10.667785 2762 kubelet.go:2306] "Pod admission denied" podUID="fe4edf16-4a39-4cd0-bf9f-59fe62065ec1" pod="tigera-operator/tigera-operator-6f6897fdc5-jhjdg" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.791029 kubelet[2762]: I0515 13:09:10.790932 2762 kubelet.go:2306] "Pod admission denied" podUID="e9ab6f9e-ea43-4d2d-b7db-eeafcb5b14cd" pod="tigera-operator/tigera-operator-6f6897fdc5-8hwld" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.863213 kubelet[2762]: I0515 13:09:10.862670 2762 kubelet.go:2306] "Pod admission denied" podUID="d14aaf21-21d7-41db-9cd4-8a2c5f172941" pod="tigera-operator/tigera-operator-6f6897fdc5-bvnrk" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:10.993470 kubelet[2762]: I0515 13:09:10.993364 2762 kubelet.go:2306] "Pod admission denied" podUID="d5b0716f-20a7-49fd-8bb8-53325f73b0d7" pod="tigera-operator/tigera-operator-6f6897fdc5-rf8w5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.077099 kubelet[2762]: I0515 13:09:11.077036 2762 kubelet.go:2306] "Pod admission denied" podUID="9d167646-8f05-4bc3-98d4-dc9c2b60f952" pod="tigera-operator/tigera-operator-6f6897fdc5-zxvlt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.286635 kubelet[2762]: I0515 13:09:11.286332 2762 kubelet.go:2306] "Pod admission denied" podUID="6922c648-7448-4c12-ac2a-054ca72ba5ae" pod="tigera-operator/tigera-operator-6f6897fdc5-pmcsk" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.374035 kubelet[2762]: I0515 13:09:11.373968 2762 kubelet.go:2306] "Pod admission denied" podUID="04ef9811-22b2-4686-be2c-d01dd08c215a" pod="tigera-operator/tigera-operator-6f6897fdc5-thsdv" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.426862 kubelet[2762]: I0515 13:09:11.426804 2762 kubelet.go:2306] "Pod admission denied" podUID="aa73f19d-8ddc-4a13-95b9-7e47120fa555" pod="tigera-operator/tigera-operator-6f6897fdc5-gnqlq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.540313 kubelet[2762]: I0515 13:09:11.539943 2762 kubelet.go:2306] "Pod admission denied" podUID="268dc807-7f4e-4128-8e38-c2df2db6fc70" pod="tigera-operator/tigera-operator-6f6897fdc5-8vv8l" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.729189 kubelet[2762]: I0515 13:09:11.729107 2762 kubelet.go:2306] "Pod admission denied" podUID="def7b71f-2cb5-4584-93a7-cf893f21c494" pod="tigera-operator/tigera-operator-6f6897fdc5-d8sjd" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.825085 kubelet[2762]: I0515 13:09:11.824947 2762 kubelet.go:2306] "Pod admission denied" podUID="8d6edc50-dfd7-468b-9cb1-6e0acce40410" pod="tigera-operator/tigera-operator-6f6897fdc5-skpwb" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:11.929243 kubelet[2762]: I0515 13:09:11.929182 2762 kubelet.go:2306] "Pod admission denied" podUID="51a79915-993c-4321-8fcf-c3324ac0b56d" pod="tigera-operator/tigera-operator-6f6897fdc5-kjsr6" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.128096 kubelet[2762]: I0515 13:09:12.128036 2762 kubelet.go:2306] "Pod admission denied" podUID="515c34e3-7a59-4e0c-89b3-f2b99c9b7238" pod="tigera-operator/tigera-operator-6f6897fdc5-kjjr5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.232367 kubelet[2762]: I0515 13:09:12.232275 2762 kubelet.go:2306] "Pod admission denied" podUID="2a4fd8ea-7c63-40f5-a3d1-4d7ccb2cf9e3" pod="tigera-operator/tigera-operator-6f6897fdc5-t4d86" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.327980 kubelet[2762]: I0515 13:09:12.327909 2762 kubelet.go:2306] "Pod admission denied" podUID="2a05e5e5-7b0a-4ca6-8c8e-9aa3e71c75eb" pod="tigera-operator/tigera-operator-6f6897fdc5-sp88g" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.426999 kubelet[2762]: I0515 13:09:12.426823 2762 kubelet.go:2306] "Pod admission denied" podUID="54b6cc72-f572-46f5-84c1-578a863fd15a" pod="tigera-operator/tigera-operator-6f6897fdc5-hcj5l" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.523004 kubelet[2762]: I0515 13:09:12.522926 2762 kubelet.go:2306] "Pod admission denied" podUID="89837f2f-e496-4b94-a52f-aaaf6e6bcffc" pod="tigera-operator/tigera-operator-6f6897fdc5-57ll5" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.627789 kubelet[2762]: I0515 13:09:12.627719 2762 kubelet.go:2306] "Pod admission denied" podUID="8f17d029-82ff-4588-b254-8375203850d2" pod="tigera-operator/tigera-operator-6f6897fdc5-87588" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.679148 kubelet[2762]: I0515 13:09:12.679005 2762 kubelet.go:2306] "Pod admission denied" podUID="0879c01d-1877-469c-9938-dc98de7101ce" pod="tigera-operator/tigera-operator-6f6897fdc5-ps64g" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.775449 kubelet[2762]: I0515 13:09:12.775372 2762 kubelet.go:2306] "Pod admission denied" podUID="bb08e853-9a30-46e2-84fb-357793d44748" pod="tigera-operator/tigera-operator-6f6897fdc5-vxnlp" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.875925 kubelet[2762]: I0515 13:09:12.875854 2762 kubelet.go:2306] "Pod admission denied" podUID="ebb4251d-3bb2-4443-beda-96c5ac77a930" pod="tigera-operator/tigera-operator-6f6897fdc5-5h9fv" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:12.990947 kubelet[2762]: I0515 13:09:12.989872 2762 kubelet.go:2306] "Pod admission denied" podUID="fe2f070a-9351-4da7-8f40-6290d6717a6d" pod="tigera-operator/tigera-operator-6f6897fdc5-497bt" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.075162 kubelet[2762]: I0515 13:09:13.075080 2762 kubelet.go:2306] "Pod admission denied" podUID="d089dd00-cf85-46c5-82ad-82cf86febd7c" pod="tigera-operator/tigera-operator-6f6897fdc5-w4vhw" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.177206 kubelet[2762]: I0515 13:09:13.177118 2762 kubelet.go:2306] "Pod admission denied" podUID="ff11d8e5-0f7d-4ecf-a328-9c1b8ab8a70a" pod="tigera-operator/tigera-operator-6f6897fdc5-kt6zr" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.278787 kubelet[2762]: I0515 13:09:13.278381 2762 kubelet.go:2306] "Pod admission denied" podUID="dc607c90-0856-4447-aacf-df7b88be0221" pod="tigera-operator/tigera-operator-6f6897fdc5-jthsz" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.378910 kubelet[2762]: I0515 13:09:13.378828 2762 kubelet.go:2306] "Pod admission denied" podUID="41e21560-a495-417e-93a3-8896589922f2" pod="tigera-operator/tigera-operator-6f6897fdc5-twgb4" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.488323 kubelet[2762]: I0515 13:09:13.488188 2762 kubelet.go:2306] "Pod admission denied" podUID="8883f170-a69a-49d8-b565-581f6507f079" pod="tigera-operator/tigera-operator-6f6897fdc5-w9kr2" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.575538 kubelet[2762]: I0515 13:09:13.575399 2762 kubelet.go:2306] "Pod admission denied" podUID="3a0d2857-3ed9-43f1-99f0-109bfd3de1ef" pod="tigera-operator/tigera-operator-6f6897fdc5-nhbhf" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.675917 kubelet[2762]: I0515 13:09:13.675860 2762 kubelet.go:2306] "Pod admission denied" podUID="ce2f1525-f4bd-40d6-8afd-cf0c1091790e" pod="tigera-operator/tigera-operator-6f6897fdc5-t875d" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.778628 kubelet[2762]: I0515 13:09:13.778540 2762 kubelet.go:2306] "Pod admission denied" podUID="132d3d09-ff42-4e95-a757-857e486dd633" pod="tigera-operator/tigera-operator-6f6897fdc5-298mb" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.876988 kubelet[2762]: I0515 13:09:13.876939 2762 kubelet.go:2306] "Pod admission denied" podUID="4f56479c-ae17-4f78-bef5-13a16658001b" pod="tigera-operator/tigera-operator-6f6897fdc5-9mfjq" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.933131 kubelet[2762]: I0515 13:09:13.932838 2762 kubelet.go:2306] "Pod admission denied" podUID="84259f56-d962-4a3e-88a6-f6dc0f8d50ab" pod="tigera-operator/tigera-operator-6f6897fdc5-8p97x" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:13.960237 containerd[1543]: time="2025-05-15T13:09:13.960133416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}"
May 15 13:09:14.032971 kubelet[2762]: I0515 13:09:14.032914 2762 kubelet.go:2306] "Pod admission denied" podUID="f6377a27-d8f8-48aa-9e66-9c41a1c73754" pod="tigera-operator/tigera-operator-6f6897fdc5-lkq64" reason="Evicted" message="The node had condition: [DiskPressure]. "
May 15 13:09:14.063184 containerd[1543]: time="2025-05-15T13:09:14.063106149Z" level=error msg="Failed to destroy network for sandbox \"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:14.067280 systemd[1]: run-netns-cni\x2d8ce98d72\x2d9f50\x2dd06b\x2d6161\x2d120a59be511d.mount: Deactivated successfully.
May 15 13:09:14.069750 containerd[1543]: time="2025-05-15T13:09:14.067856829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:14.070061 kubelet[2762]: E0515 13:09:14.069080 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:14.070061 kubelet[2762]: E0515 13:09:14.069249 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:14.070061 kubelet[2762]: E0515 13:09:14.069850 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:14.070834 kubelet[2762]: E0515 13:09:14.070046 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4601b4a29151a6cff5df6f26e3f551a51fa8dcfdd2ea8be8e26a86ad0c66d11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:09:14.124796 kubelet[2762]: I0515 13:09:14.124539 2762 kubelet.go:2306] "Pod admission denied" podUID="c77bbc0d-e6e9-4abf-a4fa-dba1f7910a45" pod="tigera-operator/tigera-operator-6f6897fdc5-g7r84" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:14.228540 kubelet[2762]: I0515 13:09:14.227449 2762 kubelet.go:2306] "Pod admission denied" podUID="a4d3c5f7-0211-4b12-9ee7-e2d4a9ebd0f3" pod="tigera-operator/tigera-operator-6f6897fdc5-r2n97" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:14.329815 kubelet[2762]: I0515 13:09:14.329737 2762 kubelet.go:2306] "Pod admission denied" podUID="dee31c43-2f9d-4fd6-8099-3ff9f7a7727f" pod="tigera-operator/tigera-operator-6f6897fdc5-jlq9r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:14.429793 kubelet[2762]: I0515 13:09:14.429546 2762 kubelet.go:2306] "Pod admission denied" podUID="9ea88bb5-d0c4-4517-899f-3ecc139c03c1" pod="tigera-operator/tigera-operator-6f6897fdc5-7v428" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:14.529851 kubelet[2762]: I0515 13:09:14.529514 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:09:14.529851 kubelet[2762]: I0515 13:09:14.529576 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:09:14.533235 kubelet[2762]: I0515 13:09:14.533197 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:09:14.537505 kubelet[2762]: I0515 13:09:14.537477 2762 kubelet.go:2306] "Pod admission denied" podUID="92af7a02-8236-4565-8253-00a973dc57f8" pod="tigera-operator/tigera-operator-6f6897fdc5-ghwdk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:14.556719 kubelet[2762]: I0515 13:09:14.556684 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:09:14.557425 kubelet[2762]: I0515 13:09:14.556804 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556838 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556848 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556855 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556861 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556869 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556883 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556894 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556902 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556911 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:09:14.557425 kubelet[2762]: E0515 13:09:14.556919 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:09:14.557425 kubelet[2762]: I0515 13:09:14.556928 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:09:14.629525 kubelet[2762]: I0515 13:09:14.629462 2762 kubelet.go:2306] "Pod admission denied" podUID="b7174099-2ea4-4fe8-935d-f89ffb2d7e20" pod="tigera-operator/tigera-operator-6f6897fdc5-khttv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:14.825377 kubelet[2762]: I0515 13:09:14.825234 2762 kubelet.go:2306] "Pod admission denied" podUID="3c2615c5-f7af-4137-bee8-fe0021e709fd" pod="tigera-operator/tigera-operator-6f6897fdc5-9hsf7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:14.926828 kubelet[2762]: I0515 13:09:14.926771 2762 kubelet.go:2306] "Pod admission denied" podUID="6c0ec52b-26db-4c27-98b6-f75f8cfd0057" pod="tigera-operator/tigera-operator-6f6897fdc5-hm624" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:14.959297 kubelet[2762]: E0515 13:09:14.958364 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:14.963142 containerd[1543]: time="2025-05-15T13:09:14.962023688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:09:14.998206 kubelet[2762]: I0515 13:09:14.998153 2762 kubelet.go:2306] "Pod admission denied" podUID="63cff08f-89ae-4075-8b92-90cd364ad1cd" pod="tigera-operator/tigera-operator-6f6897fdc5-wjgpv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.084790 containerd[1543]: time="2025-05-15T13:09:15.084368707Z" level=error msg="Failed to destroy network for sandbox \"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:15.089501 systemd[1]: run-netns-cni\x2dc4ce576f\x2de3f3\x2dca72\x2dcf7b\x2d2fab643698df.mount: Deactivated successfully. 
May 15 13:09:15.092788 containerd[1543]: time="2025-05-15T13:09:15.091742541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:15.093878 kubelet[2762]: E0515 13:09:15.093799 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:15.094001 kubelet[2762]: E0515 13:09:15.093962 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:15.094057 kubelet[2762]: E0515 13:09:15.094017 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:15.094173 kubelet[2762]: E0515 13:09:15.094125 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6788a742ea3f669f68bc96dd4421c21fcdc82568bf31f70e0c71f8756dffdc69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:09:15.099100 kubelet[2762]: I0515 13:09:15.099062 2762 kubelet.go:2306] "Pod admission denied" podUID="66b33110-ad26-4e51-aca1-9d7e81250918" pod="tigera-operator/tigera-operator-6f6897fdc5-qptsc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.322599 kubelet[2762]: I0515 13:09:15.322511 2762 kubelet.go:2306] "Pod admission denied" podUID="b9008230-5e74-4b7a-ab9d-6cdf86103c5c" pod="tigera-operator/tigera-operator-6f6897fdc5-5g8zx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.426962 kubelet[2762]: I0515 13:09:15.426902 2762 kubelet.go:2306] "Pod admission denied" podUID="f16ae77b-3a11-4fd1-838d-c7b1193cc215" pod="tigera-operator/tigera-operator-6f6897fdc5-7xqnj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.526187 kubelet[2762]: I0515 13:09:15.526126 2762 kubelet.go:2306] "Pod admission denied" podUID="80810662-45e5-4817-be1b-17e232b4b4f1" pod="tigera-operator/tigera-operator-6f6897fdc5-xgs5j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:15.636510 kubelet[2762]: I0515 13:09:15.635086 2762 kubelet.go:2306] "Pod admission denied" podUID="21697a47-6b9f-4afd-aa9a-a850bcb7d157" pod="tigera-operator/tigera-operator-6f6897fdc5-t7l88" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.729942 kubelet[2762]: I0515 13:09:15.729798 2762 kubelet.go:2306] "Pod admission denied" podUID="fc29568a-14c1-45e4-bff7-f4cef2d0c1f0" pod="tigera-operator/tigera-operator-6f6897fdc5-wbbvm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.776588 kubelet[2762]: I0515 13:09:15.776503 2762 kubelet.go:2306] "Pod admission denied" podUID="b22b9e5c-2b81-4578-b3b6-d13a29d6233d" pod="tigera-operator/tigera-operator-6f6897fdc5-67srp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.880482 kubelet[2762]: I0515 13:09:15.880419 2762 kubelet.go:2306] "Pod admission denied" podUID="8ce0e818-6d32-4924-8b77-ad86589e3aee" pod="tigera-operator/tigera-operator-6f6897fdc5-dtrcc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:15.981041 kubelet[2762]: I0515 13:09:15.980056 2762 kubelet.go:2306] "Pod admission denied" podUID="94c44c6d-1c67-4124-82c9-a71d94178e51" pod="tigera-operator/tigera-operator-6f6897fdc5-5z5mm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.089748 kubelet[2762]: I0515 13:09:16.089686 2762 kubelet.go:2306] "Pod admission denied" podUID="9e61300d-be54-4d83-a4bb-8f8295c9a7b8" pod="tigera-operator/tigera-operator-6f6897fdc5-rppwr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.178838 kubelet[2762]: I0515 13:09:16.178775 2762 kubelet.go:2306] "Pod admission denied" podUID="434095f4-b9e8-484c-8fc9-7e4c82b43a16" pod="tigera-operator/tigera-operator-6f6897fdc5-ns95c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:16.276589 kubelet[2762]: I0515 13:09:16.275877 2762 kubelet.go:2306] "Pod admission denied" podUID="1ffff313-4c7f-4782-9e48-330bc5f80e87" pod="tigera-operator/tigera-operator-6f6897fdc5-bltm4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.426096 kubelet[2762]: I0515 13:09:16.426012 2762 kubelet.go:2306] "Pod admission denied" podUID="a4a4d748-34b7-473e-9f5e-6b26badf2805" pod="tigera-operator/tigera-operator-6f6897fdc5-7kbd9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.497146 kubelet[2762]: I0515 13:09:16.497077 2762 kubelet.go:2306] "Pod admission denied" podUID="a4c91894-0063-41a0-8806-3e391c06206b" pod="tigera-operator/tigera-operator-6f6897fdc5-hpzh7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.587256 kubelet[2762]: I0515 13:09:16.586390 2762 kubelet.go:2306] "Pod admission denied" podUID="e351b1ac-9eb8-4ab7-ae57-4ee0f4fba313" pod="tigera-operator/tigera-operator-6f6897fdc5-vk8zl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.748646 kubelet[2762]: I0515 13:09:16.748576 2762 kubelet.go:2306] "Pod admission denied" podUID="ce67527b-2ecd-4974-9230-341534bbe30b" pod="tigera-operator/tigera-operator-6f6897fdc5-6t5gc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:16.927460 kubelet[2762]: I0515 13:09:16.927399 2762 kubelet.go:2306] "Pod admission denied" podUID="331bb41c-e674-4fe0-9eb3-5a69115668fb" pod="tigera-operator/tigera-operator-6f6897fdc5-8hblv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.037011 kubelet[2762]: I0515 13:09:17.036943 2762 kubelet.go:2306] "Pod admission denied" podUID="98978bf8-6059-495b-9f50-7ca4bc3d4ea7" pod="tigera-operator/tigera-operator-6f6897fdc5-72cqz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:17.129199 kubelet[2762]: I0515 13:09:17.129136 2762 kubelet.go:2306] "Pod admission denied" podUID="6a69ecd7-16e0-4492-bcaf-503b4f3bdd81" pod="tigera-operator/tigera-operator-6f6897fdc5-8mk5m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.227120 kubelet[2762]: I0515 13:09:17.226976 2762 kubelet.go:2306] "Pod admission denied" podUID="adf32489-ca22-4bda-aafa-7f40cb7f4dfe" pod="tigera-operator/tigera-operator-6f6897fdc5-vjlf7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.324049 kubelet[2762]: I0515 13:09:17.323985 2762 kubelet.go:2306] "Pod admission denied" podUID="233bb2b9-578a-4452-bff9-81d9bc391542" pod="tigera-operator/tigera-operator-6f6897fdc5-25mwx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.428945 kubelet[2762]: I0515 13:09:17.428887 2762 kubelet.go:2306] "Pod admission denied" podUID="10d86b14-76b9-43cd-a78d-25b6fec2281e" pod="tigera-operator/tigera-operator-6f6897fdc5-5hk2p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.538492 kubelet[2762]: I0515 13:09:17.536923 2762 kubelet.go:2306] "Pod admission denied" podUID="3d5157f3-fe30-4d53-9238-b2141813aee3" pod="tigera-operator/tigera-operator-6f6897fdc5-lhlp5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.628365 kubelet[2762]: I0515 13:09:17.628278 2762 kubelet.go:2306] "Pod admission denied" podUID="17356a59-57a5-4c71-8a1e-1e97a6b76615" pod="tigera-operator/tigera-operator-6f6897fdc5-zn7rq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.728689 kubelet[2762]: I0515 13:09:17.728449 2762 kubelet.go:2306] "Pod admission denied" podUID="bf8c3069-0d6b-4625-bba3-74d50d5acf81" pod="tigera-operator/tigera-operator-6f6897fdc5-dzklg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:17.787310 kubelet[2762]: I0515 13:09:17.787245 2762 kubelet.go:2306] "Pod admission denied" podUID="b02d718b-5f6b-42fa-a908-9c7d71617b2e" pod="tigera-operator/tigera-operator-6f6897fdc5-djl4x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.874388 kubelet[2762]: I0515 13:09:17.874340 2762 kubelet.go:2306] "Pod admission denied" podUID="7401493e-770a-4d03-aa6a-a080a46ef4e0" pod="tigera-operator/tigera-operator-6f6897fdc5-gpls6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:17.960888 kubelet[2762]: E0515 13:09:17.960788 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:17.966903 kubelet[2762]: E0515 13:09:17.966844 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-h5k9z" podUID="1a8a24dd-708e-4ec3-b972-4df98026b344" May 15 13:09:17.986598 kubelet[2762]: I0515 13:09:17.986484 2762 kubelet.go:2306] "Pod admission denied" podUID="3e09e36a-2a77-4277-9182-c8959ed63a04" pod="tigera-operator/tigera-operator-6f6897fdc5-tct8z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:18.014972 kubelet[2762]: I0515 13:09:18.008503 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-tct8z" podStartSLOduration=1.008445914 podStartE2EDuration="1.008445914s" podCreationTimestamp="2025-05-15 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:09:17.999764918 +0000 UTC m=+108.172062802" watchObservedRunningTime="2025-05-15 13:09:18.008445914 +0000 UTC m=+108.180743808" May 15 13:09:18.081376 kubelet[2762]: I0515 13:09:18.081334 2762 kubelet.go:2306] "Pod admission denied" podUID="0145c97d-d8b7-455a-ac2e-1d3d8db18a6a" pod="tigera-operator/tigera-operator-6f6897fdc5-579r6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.179307 kubelet[2762]: I0515 13:09:18.178473 2762 kubelet.go:2306] "Pod admission denied" podUID="18948670-b8d3-4a8e-9bf7-3697deda6fb8" pod="tigera-operator/tigera-operator-6f6897fdc5-dxkvn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.279414 kubelet[2762]: I0515 13:09:18.279341 2762 kubelet.go:2306] "Pod admission denied" podUID="181dcaa4-c92e-47ad-af94-2a5b06f85748" pod="tigera-operator/tigera-operator-6f6897fdc5-4frtk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.378200 kubelet[2762]: I0515 13:09:18.376842 2762 kubelet.go:2306] "Pod admission denied" podUID="def49a30-a0fd-4b12-8160-aa8e9479b86a" pod="tigera-operator/tigera-operator-6f6897fdc5-tmr69" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.491898 kubelet[2762]: I0515 13:09:18.491388 2762 kubelet.go:2306] "Pod admission denied" podUID="4b553c59-7b6a-4e51-8826-aed0f3c04ebf" pod="tigera-operator/tigera-operator-6f6897fdc5-zphj8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:18.578420 kubelet[2762]: I0515 13:09:18.578365 2762 kubelet.go:2306] "Pod admission denied" podUID="53338f0b-a78c-4508-8631-981bf740207a" pod="tigera-operator/tigera-operator-6f6897fdc5-lvspp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.678859 kubelet[2762]: I0515 13:09:18.678797 2762 kubelet.go:2306] "Pod admission denied" podUID="cc29f5ad-4dc5-4390-9f56-ecbec34e4e0f" pod="tigera-operator/tigera-operator-6f6897fdc5-xfbv4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.878339 kubelet[2762]: I0515 13:09:18.878287 2762 kubelet.go:2306] "Pod admission denied" podUID="0ea99d82-ec98-47fa-b9d6-104733f1f351" pod="tigera-operator/tigera-operator-6f6897fdc5-k8f4t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:18.980222 kubelet[2762]: I0515 13:09:18.980156 2762 kubelet.go:2306] "Pod admission denied" podUID="22faa45f-d25b-4544-a36c-d716975057cc" pod="tigera-operator/tigera-operator-6f6897fdc5-bm5hl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.036982 kubelet[2762]: I0515 13:09:19.036125 2762 kubelet.go:2306] "Pod admission denied" podUID="9faf324c-f131-4b35-975b-030d69e4b5d3" pod="tigera-operator/tigera-operator-6f6897fdc5-r67cb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.126923 kubelet[2762]: I0515 13:09:19.126866 2762 kubelet.go:2306] "Pod admission denied" podUID="115a58b0-482b-43d6-9c36-e4979e71f512" pod="tigera-operator/tigera-operator-6f6897fdc5-nkk25" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.226021 kubelet[2762]: I0515 13:09:19.225869 2762 kubelet.go:2306] "Pod admission denied" podUID="78f5d11e-eb0d-493f-abd2-355931689fc5" pod="tigera-operator/tigera-operator-6f6897fdc5-w9gfx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:19.327145 kubelet[2762]: I0515 13:09:19.327075 2762 kubelet.go:2306] "Pod admission denied" podUID="0b449de5-58e5-43f3-8ac6-bf6533cf2bf9" pod="tigera-operator/tigera-operator-6f6897fdc5-vgc5x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.530312 kubelet[2762]: I0515 13:09:19.530129 2762 kubelet.go:2306] "Pod admission denied" podUID="c5c94ded-5a17-4253-8430-27b62c4b3619" pod="tigera-operator/tigera-operator-6f6897fdc5-vnmlb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.641590 kubelet[2762]: I0515 13:09:19.640894 2762 kubelet.go:2306] "Pod admission denied" podUID="20d9ff77-2c95-4cbd-9cd6-3a968b8404d1" pod="tigera-operator/tigera-operator-6f6897fdc5-v94tb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.692339 kubelet[2762]: I0515 13:09:19.692273 2762 kubelet.go:2306] "Pod admission denied" podUID="3bdc8f2a-bc96-48e3-bc85-d1f4255e0844" pod="tigera-operator/tigera-operator-6f6897fdc5-rzz8h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.778190 kubelet[2762]: I0515 13:09:19.778131 2762 kubelet.go:2306] "Pod admission denied" podUID="405062fe-ae25-4b83-aba8-4c000fefd48a" pod="tigera-operator/tigera-operator-6f6897fdc5-dk6jt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.873566 kubelet[2762]: I0515 13:09:19.873500 2762 kubelet.go:2306] "Pod admission denied" podUID="6707d0be-878f-47bc-bd6e-e815bf2961e8" pod="tigera-operator/tigera-operator-6f6897fdc5-xt49h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:19.982743 kubelet[2762]: I0515 13:09:19.982675 2762 kubelet.go:2306] "Pod admission denied" podUID="0235549c-0073-45e7-b337-c604f5b9bbf5" pod="tigera-operator/tigera-operator-6f6897fdc5-9b7kc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:20.189996 kubelet[2762]: I0515 13:09:20.189813 2762 kubelet.go:2306] "Pod admission denied" podUID="48834330-15d1-42be-ba02-5d12fbae5d55" pod="tigera-operator/tigera-operator-6f6897fdc5-vlkhl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.274111 kubelet[2762]: I0515 13:09:20.274052 2762 kubelet.go:2306] "Pod admission denied" podUID="85752e4a-6435-40ab-abe0-72196142c011" pod="tigera-operator/tigera-operator-6f6897fdc5-6dt27" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.376993 kubelet[2762]: I0515 13:09:20.376914 2762 kubelet.go:2306] "Pod admission denied" podUID="c614c9d7-b574-4d56-8760-a906be985837" pod="tigera-operator/tigera-operator-6f6897fdc5-sg96w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.477336 kubelet[2762]: I0515 13:09:20.477134 2762 kubelet.go:2306] "Pod admission denied" podUID="8bba0934-9806-4efc-9f7a-42050bb25475" pod="tigera-operator/tigera-operator-6f6897fdc5-z6cm5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.574966 kubelet[2762]: I0515 13:09:20.574906 2762 kubelet.go:2306] "Pod admission denied" podUID="42bd51ad-3a91-464f-a9f5-c6419fad7a46" pod="tigera-operator/tigera-operator-6f6897fdc5-8ts7w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.679584 kubelet[2762]: I0515 13:09:20.679193 2762 kubelet.go:2306] "Pod admission denied" podUID="f8bc4cda-9074-475d-8d18-f90ed55efbb2" pod="tigera-operator/tigera-operator-6f6897fdc5-8m7t8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.777317 kubelet[2762]: I0515 13:09:20.777175 2762 kubelet.go:2306] "Pod admission denied" podUID="a984e8c2-c871-468e-b070-6a717e68553b" pod="tigera-operator/tigera-operator-6f6897fdc5-wsxlv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:20.877124 kubelet[2762]: I0515 13:09:20.877055 2762 kubelet.go:2306] "Pod admission denied" podUID="bf28d7db-82ba-44a0-81d6-6c85f8c6a6ff" pod="tigera-operator/tigera-operator-6f6897fdc5-fcwrh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:20.957161 kubelet[2762]: E0515 13:09:20.957106 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:20.959197 containerd[1543]: time="2025-05-15T13:09:20.958291871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:09:20.992252 kubelet[2762]: I0515 13:09:20.992204 2762 kubelet.go:2306] "Pod admission denied" podUID="e9ab3759-d786-4761-b616-f5da278b07f1" pod="tigera-operator/tigera-operator-6f6897fdc5-9m6hd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:21.047875 containerd[1543]: time="2025-05-15T13:09:21.044083244Z" level=error msg="Failed to destroy network for sandbox \"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:21.047875 containerd[1543]: time="2025-05-15T13:09:21.045442097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:21.048083 kubelet[2762]: E0515 13:09:21.046053 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:21.048083 kubelet[2762]: E0515 13:09:21.046149 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:21.048083 kubelet[2762]: E0515 13:09:21.046172 2762 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:21.048083 kubelet[2762]: E0515 13:09:21.046230 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"193cbed83547ddfc98dfa1fd5141a9e184f01f7df87b707d7a234627768e17a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:09:21.049278 systemd[1]: run-netns-cni\x2d875adc55\x2d679c\x2d8dbf\x2dcff9\x2d82e5effb5602.mount: Deactivated successfully. May 15 13:09:21.177347 kubelet[2762]: I0515 13:09:21.177287 2762 kubelet.go:2306] "Pod admission denied" podUID="741a6984-8669-4877-b3cc-a2ae0d9eb3e3" pod="tigera-operator/tigera-operator-6f6897fdc5-5qslp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.286276 kubelet[2762]: I0515 13:09:21.285314 2762 kubelet.go:2306] "Pod admission denied" podUID="e216f6dc-c0bc-439c-99bb-48c6cb3150df" pod="tigera-operator/tigera-operator-6f6897fdc5-2lfqn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:21.375118 kubelet[2762]: I0515 13:09:21.375070 2762 kubelet.go:2306] "Pod admission denied" podUID="20462ef0-7e01-474e-88c0-58c3c756ed6a" pod="tigera-operator/tigera-operator-6f6897fdc5-5zqst" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.475461 kubelet[2762]: I0515 13:09:21.475407 2762 kubelet.go:2306] "Pod admission denied" podUID="bd94f24a-62ee-4556-a7ad-d22b8fdfacea" pod="tigera-operator/tigera-operator-6f6897fdc5-2jwc8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.576303 kubelet[2762]: I0515 13:09:21.576242 2762 kubelet.go:2306] "Pod admission denied" podUID="0cf3357e-2795-408e-a223-4a0fc8c59707" pod="tigera-operator/tigera-operator-6f6897fdc5-jffcl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.677940 kubelet[2762]: I0515 13:09:21.677600 2762 kubelet.go:2306] "Pod admission denied" podUID="b17c5621-273d-47c1-b19b-cf833ecd1c6e" pod="tigera-operator/tigera-operator-6f6897fdc5-8rb94" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.806751 kubelet[2762]: I0515 13:09:21.805696 2762 kubelet.go:2306] "Pod admission denied" podUID="a1f60f24-4653-48c2-a148-eddc837904f9" pod="tigera-operator/tigera-operator-6f6897fdc5-dsf8c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.880579 kubelet[2762]: I0515 13:09:21.880509 2762 kubelet.go:2306] "Pod admission denied" podUID="66745ad1-5fcd-4289-8807-e147b5ed0401" pod="tigera-operator/tigera-operator-6f6897fdc5-zhspv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:21.979144 kubelet[2762]: I0515 13:09:21.978297 2762 kubelet.go:2306] "Pod admission denied" podUID="4caedc78-6882-4bbb-b6ee-8bf6352ddfe6" pod="tigera-operator/tigera-operator-6f6897fdc5-ml7bl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:22.078512 kubelet[2762]: I0515 13:09:22.078468 2762 kubelet.go:2306] "Pod admission denied" podUID="8e7330c0-789f-4c28-84db-e85450add89a" pod="tigera-operator/tigera-operator-6f6897fdc5-dq85g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.124036 kubelet[2762]: I0515 13:09:22.123982 2762 kubelet.go:2306] "Pod admission denied" podUID="1dcfeca6-d645-40aa-88ee-6845fc3cc717" pod="tigera-operator/tigera-operator-6f6897fdc5-gmwj7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.237574 kubelet[2762]: I0515 13:09:22.235166 2762 kubelet.go:2306] "Pod admission denied" podUID="85ce1f55-b885-485d-bc7f-0aa159f430e1" pod="tigera-operator/tigera-operator-6f6897fdc5-6qwck" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.336229 kubelet[2762]: I0515 13:09:22.336161 2762 kubelet.go:2306] "Pod admission denied" podUID="3215b461-3153-4c95-932b-b60744c5088a" pod="tigera-operator/tigera-operator-6f6897fdc5-29m5v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.431439 kubelet[2762]: I0515 13:09:22.431366 2762 kubelet.go:2306] "Pod admission denied" podUID="2662b777-cf10-487b-b01e-8e946304fd75" pod="tigera-operator/tigera-operator-6f6897fdc5-tvlcj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.533409 kubelet[2762]: I0515 13:09:22.533245 2762 kubelet.go:2306] "Pod admission denied" podUID="e0441012-eaaa-4596-86e7-248ea87ed9e7" pod="tigera-operator/tigera-operator-6f6897fdc5-djc4g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.628593 kubelet[2762]: I0515 13:09:22.628472 2762 kubelet.go:2306] "Pod admission denied" podUID="58144ca8-610b-4daa-a23d-6132321088ab" pod="tigera-operator/tigera-operator-6f6897fdc5-6g4xf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:22.741754 kubelet[2762]: I0515 13:09:22.741413 2762 kubelet.go:2306] "Pod admission denied" podUID="5e5f37ed-c618-4fa5-a020-942513c8d1ea" pod="tigera-operator/tigera-operator-6f6897fdc5-cxk7h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.832377 kubelet[2762]: I0515 13:09:22.831661 2762 kubelet.go:2306] "Pod admission denied" podUID="5cd7793e-953f-445d-a1dc-4885b96f1dc7" pod="tigera-operator/tigera-operator-6f6897fdc5-wlkmx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.929135 kubelet[2762]: I0515 13:09:22.929063 2762 kubelet.go:2306] "Pod admission denied" podUID="61874f43-47a3-4eea-b163-cc23c7989d89" pod="tigera-operator/tigera-operator-6f6897fdc5-hgfgh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:22.973946 containerd[1543]: time="2025-05-15T13:09:22.973332915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:09:23.035846 kubelet[2762]: I0515 13:09:23.035745 2762 kubelet.go:2306] "Pod admission denied" podUID="b2a29955-5660-4396-866f-a371c0b9a76c" pod="tigera-operator/tigera-operator-6f6897fdc5-4t5kr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:23.062596 containerd[1543]: time="2025-05-15T13:09:23.062488504Z" level=error msg="Failed to destroy network for sandbox \"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:23.066948 containerd[1543]: time="2025-05-15T13:09:23.065923400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:23.067051 kubelet[2762]: E0515 13:09:23.066110 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:23.067051 kubelet[2762]: E0515 13:09:23.066159 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:23.067051 kubelet[2762]: E0515 13:09:23.066181 2762 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:23.067051 kubelet[2762]: E0515 13:09:23.066219 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0f56478ac4612419a8f6c93d00e178895475b3f0167a2f468ecf1b11d2271da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:09:23.069995 systemd[1]: run-netns-cni\x2da96a7b88\x2ddb3b\x2dee1b\x2db429\x2ddc15f2f10409.mount: Deactivated successfully. May 15 13:09:23.126939 kubelet[2762]: I0515 13:09:23.126885 2762 kubelet.go:2306] "Pod admission denied" podUID="e55edfc2-48c4-4953-b35d-9e9f34d51a0f" pod="tigera-operator/tigera-operator-6f6897fdc5-cqcl5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.245366 kubelet[2762]: I0515 13:09:23.244540 2762 kubelet.go:2306] "Pod admission denied" podUID="8403edb8-90be-49c6-a8d6-9042b236b170" pod="tigera-operator/tigera-operator-6f6897fdc5-sfcmc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:23.333214 kubelet[2762]: I0515 13:09:23.333147 2762 kubelet.go:2306] "Pod admission denied" podUID="b3b14383-cb85-4df4-a2b9-a1eefedc6117" pod="tigera-operator/tigera-operator-6f6897fdc5-tmcw9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.434419 kubelet[2762]: I0515 13:09:23.434294 2762 kubelet.go:2306] "Pod admission denied" podUID="99403de0-d96a-4b2d-8815-9ff0b46b8a66" pod="tigera-operator/tigera-operator-6f6897fdc5-hz622" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.534858 kubelet[2762]: I0515 13:09:23.534757 2762 kubelet.go:2306] "Pod admission denied" podUID="58000469-c296-4c01-8fc2-37f3ae67961f" pod="tigera-operator/tigera-operator-6f6897fdc5-gmvmk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.602591 kubelet[2762]: I0515 13:09:23.602510 2762 kubelet.go:2306] "Pod admission denied" podUID="78efc6bc-e8c2-46b5-b0b4-916d936bb6f4" pod="tigera-operator/tigera-operator-6f6897fdc5-t5884" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.748499 kubelet[2762]: I0515 13:09:23.746318 2762 kubelet.go:2306] "Pod admission denied" podUID="633a0f4d-3d99-4a99-84a6-ce2c7078aa36" pod="tigera-operator/tigera-operator-6f6897fdc5-9crfg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:23.919911 kubelet[2762]: I0515 13:09:23.919844 2762 kubelet.go:2306] "Pod admission denied" podUID="0761733a-31e0-4efd-a405-ecc4c6bda1b3" pod="tigera-operator/tigera-operator-6f6897fdc5-j86k6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.056287 kubelet[2762]: I0515 13:09:24.055478 2762 kubelet.go:2306] "Pod admission denied" podUID="c60515e2-45c8-4437-9813-1caf09a409ed" pod="tigera-operator/tigera-operator-6f6897fdc5-ds826" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:24.179199 kubelet[2762]: I0515 13:09:24.179124 2762 kubelet.go:2306] "Pod admission denied" podUID="528265c9-3fd9-4008-b199-38943c6f89a7" pod="tigera-operator/tigera-operator-6f6897fdc5-sgzzz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.280490 kubelet[2762]: I0515 13:09:24.280421 2762 kubelet.go:2306] "Pod admission denied" podUID="784e1469-010e-4ff4-b2eb-815d6b4af619" pod="tigera-operator/tigera-operator-6f6897fdc5-zddnw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.478308 kubelet[2762]: I0515 13:09:24.478233 2762 kubelet.go:2306] "Pod admission denied" podUID="137297fe-a1f6-40bd-9bd6-0cad8a7b8e9c" pod="tigera-operator/tigera-operator-6f6897fdc5-54chf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.586400 kubelet[2762]: I0515 13:09:24.586346 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:09:24.586400 kubelet[2762]: I0515 13:09:24.586416 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:09:24.591578 kubelet[2762]: I0515 13:09:24.591367 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:09:24.593577 kubelet[2762]: I0515 13:09:24.593515 2762 kubelet.go:2306] "Pod admission denied" podUID="3ac5cf6f-184d-4de3-bb6a-08db258ae5f6" pod="tigera-operator/tigera-operator-6f6897fdc5-mjd2t" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:24.621992 kubelet[2762]: I0515 13:09:24.621952 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:09:24.622391 kubelet[2762]: I0515 13:09:24.622359 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:09:24.623349 kubelet[2762]: E0515 13:09:24.622803 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:24.623605 kubelet[2762]: E0515 13:09:24.623585 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623897 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623912 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623922 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623953 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623966 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623978 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.623993 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:09:24.624448 kubelet[2762]: E0515 13:09:24.624004 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:09:24.624448 kubelet[2762]: I0515 13:09:24.624019 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:09:24.689870 kubelet[2762]: I0515 13:09:24.689743 2762 kubelet.go:2306] "Pod admission denied" podUID="61368467-9ab9-487c-a2a9-55da050ae80e" pod="tigera-operator/tigera-operator-6f6897fdc5-kb4n6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.786172 kubelet[2762]: I0515 13:09:24.786012 2762 kubelet.go:2306] "Pod admission denied" podUID="b0ceaa0b-5788-41d9-8ab0-c2e3940908e7" pod="tigera-operator/tigera-operator-6f6897fdc5-2wwt5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.856272 kubelet[2762]: I0515 13:09:24.856160 2762 kubelet.go:2306] "Pod admission denied" podUID="173af387-4c4c-4ab8-8cc1-61874aa39b09" pod="tigera-operator/tigera-operator-6f6897fdc5-cqx79" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:24.973281 kubelet[2762]: I0515 13:09:24.973207 2762 kubelet.go:2306] "Pod admission denied" podUID="e5e242cd-227b-4847-a92c-d3d9c7ff936a" pod="tigera-operator/tigera-operator-6f6897fdc5-mqs87" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:25.077746 kubelet[2762]: I0515 13:09:25.077605 2762 kubelet.go:2306] "Pod admission denied" podUID="554388dd-2278-4938-929b-93e03163ed76" pod="tigera-operator/tigera-operator-6f6897fdc5-hsg5d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.178311 kubelet[2762]: I0515 13:09:25.178254 2762 kubelet.go:2306] "Pod admission denied" podUID="c14dde77-270b-47f4-8236-0a58abf3b3e5" pod="tigera-operator/tigera-operator-6f6897fdc5-n479j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.279923 kubelet[2762]: I0515 13:09:25.279649 2762 kubelet.go:2306] "Pod admission denied" podUID="12d24e24-ebdf-4db0-93e5-4e37ea8a9343" pod="tigera-operator/tigera-operator-6f6897fdc5-stjt8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.385801 kubelet[2762]: I0515 13:09:25.385539 2762 kubelet.go:2306] "Pod admission denied" podUID="1f274d23-81f7-4d4d-b9b3-cf3977c1043c" pod="tigera-operator/tigera-operator-6f6897fdc5-fslxn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.480013 kubelet[2762]: I0515 13:09:25.479956 2762 kubelet.go:2306] "Pod admission denied" podUID="a0674901-2c2a-4d6a-9d83-6657eb43f66f" pod="tigera-operator/tigera-operator-6f6897fdc5-8r2bp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.577600 kubelet[2762]: I0515 13:09:25.577532 2762 kubelet.go:2306] "Pod admission denied" podUID="89fa0233-fa53-4ee4-9bd1-abc93e3d3913" pod="tigera-operator/tigera-operator-6f6897fdc5-j6826" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.682225 kubelet[2762]: I0515 13:09:25.681816 2762 kubelet.go:2306] "Pod admission denied" podUID="22b23968-003d-45d5-887e-eb945282b497" pod="tigera-operator/tigera-operator-6f6897fdc5-7lwbp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:25.777308 kubelet[2762]: I0515 13:09:25.777255 2762 kubelet.go:2306] "Pod admission denied" podUID="8575a53e-2bb1-452e-86ef-b2ad1193ba75" pod="tigera-operator/tigera-operator-6f6897fdc5-ksjpb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.889487 kubelet[2762]: I0515 13:09:25.889415 2762 kubelet.go:2306] "Pod admission denied" podUID="1b195e4b-5c5e-4382-9772-885de9ae2fa1" pod="tigera-operator/tigera-operator-6f6897fdc5-mbdm2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:25.986864 kubelet[2762]: I0515 13:09:25.986718 2762 kubelet.go:2306] "Pod admission denied" podUID="76e74627-1094-41ec-a5f9-938e239d0311" pod="tigera-operator/tigera-operator-6f6897fdc5-wgtkb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:26.075978 kubelet[2762]: I0515 13:09:26.075918 2762 kubelet.go:2306] "Pod admission denied" podUID="a08c90fa-ec15-4012-8ef6-d32dbeb3ef54" pod="tigera-operator/tigera-operator-6f6897fdc5-qws8p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:26.194885 kubelet[2762]: I0515 13:09:26.194830 2762 kubelet.go:2306] "Pod admission denied" podUID="eacca8c9-36f3-4b75-9f1c-adc7600f738f" pod="tigera-operator/tigera-operator-6f6897fdc5-ld2vh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:26.257829 kubelet[2762]: I0515 13:09:26.257704 2762 kubelet.go:2306] "Pod admission denied" podUID="7c4df0cc-19ae-4f67-97c0-ae144716eab3" pod="tigera-operator/tigera-operator-6f6897fdc5-kkrbt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:26.385579 kubelet[2762]: I0515 13:09:26.385037 2762 kubelet.go:2306] "Pod admission denied" podUID="358142d7-b66b-4dca-b0fa-e78902ac0379" pod="tigera-operator/tigera-operator-6f6897fdc5-7dvwq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 13:09:26.442828 kubelet[2762]: I0515 13:09:26.442759 2762 kubelet.go:2306] "Pod admission denied" podUID="5628fe4b-6c5d-4488-aa23-ee58bb029258" pod="tigera-operator/tigera-operator-6f6897fdc5-gx9lc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 13:09:27.031673 systemd[1]: Started sshd@10-172.236.109.179:22-139.178.89.65:40302.service - OpenSSH per-connection server daemon (139.178.89.65:40302). May 15 13:09:27.382662 sshd[4480]: Accepted publickey for core from 139.178.89.65 port 40302 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:27.384705 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:27.392829 systemd-logind[1516]: New session 10 of user core. May 15 13:09:27.401708 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 13:09:27.737949 sshd[4482]: Connection closed by 139.178.89.65 port 40302 May 15 13:09:27.739355 sshd-session[4480]: pam_unix(sshd:session): session closed for user core May 15 13:09:27.744175 systemd[1]: sshd@10-172.236.109.179:22-139.178.89.65:40302.service: Deactivated successfully. May 15 13:09:27.746490 systemd[1]: session-10.scope: Deactivated successfully. May 15 13:09:27.747549 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. May 15 13:09:27.749954 systemd-logind[1516]: Removed session 10. 
May 15 13:09:27.958824 containerd[1543]: time="2025-05-15T13:09:27.958412845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}"
May 15 13:09:28.028675 containerd[1543]: time="2025-05-15T13:09:28.026516436Z" level=error msg="Failed to destroy network for sandbox \"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:28.029687 containerd[1543]: time="2025-05-15T13:09:28.029627432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:28.030484 kubelet[2762]: E0515 13:09:28.030098 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:28.030484 kubelet[2762]: E0515 13:09:28.030155 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:09:28.030484 kubelet[2762]: E0515 13:09:28.030176 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:09:28.030484 kubelet[2762]: E0515 13:09:28.030217 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f32dac8469181929525eb766ce8e9bb5276985097902e7786c2039089065cad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:09:28.030287 systemd[1]: run-netns-cni\x2d72dd4312\x2dfaee\x2deb79\x2d6e9e\x2d4b239ca17097.mount: Deactivated successfully.
May 15 13:09:28.957234 kubelet[2762]: E0515 13:09:28.957180 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:28.958799 containerd[1543]: time="2025-05-15T13:09:28.958738037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}"
May 15 13:09:29.016665 containerd[1543]: time="2025-05-15T13:09:29.016616806Z" level=error msg="Failed to destroy network for sandbox \"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:29.019255 systemd[1]: run-netns-cni\x2d74a70ea7\x2df08e\x2d274b\x2db6e9\x2d5edbdc164ce6.mount: Deactivated successfully.
May 15 13:09:29.020427 containerd[1543]: time="2025-05-15T13:09:29.020333414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:29.022282 kubelet[2762]: E0515 13:09:29.021024 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 15 13:09:29.022282 kubelet[2762]: E0515 13:09:29.021084 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:09:29.022282 kubelet[2762]: E0515 13:09:29.021108 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:29.022282 kubelet[2762]: E0515 13:09:29.021145 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a740d6ea0516500c7efe06858c93ea7d232ed5c6a4a2cb07ac1d9b79dc1cfae3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:09:31.957153 kubelet[2762]: E0515 13:09:31.956590 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:31.957979 containerd[1543]: time="2025-05-15T13:09:31.957949783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 13:09:32.805775 systemd[1]: Started sshd@11-172.236.109.179:22-139.178.89.65:40316.service - OpenSSH per-connection server daemon (139.178.89.65:40316). May 15 13:09:33.162218 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 40316 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:33.164576 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:33.175334 systemd-logind[1516]: New session 11 of user core. May 15 13:09:33.181072 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 13:09:33.507655 sshd[4559]: Connection closed by 139.178.89.65 port 40316
May 15 13:09:33.508437 sshd-session[4557]: pam_unix(sshd:session): session closed for user core
May 15 13:09:33.515044 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit.
May 15 13:09:33.517604 systemd[1]: sshd@11-172.236.109.179:22-139.178.89.65:40316.service: Deactivated successfully.
May 15 13:09:33.521371 systemd[1]: session-11.scope: Deactivated successfully.
May 15 13:09:33.525326 systemd-logind[1516]: Removed session 11.
May 15 13:09:34.671027 kubelet[2762]: I0515 13:09:34.670977 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:09:34.672993 kubelet[2762]: I0515 13:09:34.672584 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:09:34.676708 kubelet[2762]: I0515 13:09:34.676659 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:09:34.680130 kubelet[2762]: I0515 13:09:34.680095 2762 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler=""
May 15 13:09:34.681465 containerd[1543]: time="2025-05-15T13:09:34.681223595Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 15 13:09:34.682745 containerd[1543]: time="2025-05-15T13:09:34.682702927Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\""
May 15 13:09:34.683425 containerd[1543]: time="2025-05-15T13:09:34.683389838Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\""
May 15 13:09:34.684089 containerd[1543]: time="2025-05-15T13:09:34.684042140Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully"
May 15 13:09:34.684581 containerd[1543]: 
time="2025-05-15T13:09:34.684115890Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 13:09:34.684649 kubelet[2762]: I0515 13:09:34.684491 2762 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 15 13:09:34.685209 containerd[1543]: time="2025-05-15T13:09:34.685178352Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 13:09:34.686478 containerd[1543]: time="2025-05-15T13:09:34.686451714Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 13:09:34.687351 containerd[1543]: time="2025-05-15T13:09:34.687329426Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 15 13:09:34.688021 containerd[1543]: time="2025-05-15T13:09:34.687998388Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 15 13:09:34.688153 containerd[1543]: time="2025-05-15T13:09:34.688129338Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 13:09:34.712388 kubelet[2762]: I0515 13:09:34.712044 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:09:34.712887 kubelet[2762]: I0515 13:09:34.712769 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:09:34.712998 kubelet[2762]: E0515 13:09:34.712921 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:34.712998 kubelet[2762]: E0515 13:09:34.712937 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:34.712998 kubelet[2762]: E0515 13:09:34.712949 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:34.712998 kubelet[2762]: E0515 13:09:34.712956 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:09:34.712998 kubelet[2762]: E0515 13:09:34.712993 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:34.713694 kubelet[2762]: E0515 13:09:34.713006 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:09:34.713694 kubelet[2762]: E0515 13:09:34.713015 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:09:34.713694 kubelet[2762]: E0515 13:09:34.713027 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:09:34.713694 kubelet[2762]: E0515 13:09:34.713147 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:09:34.713694 kubelet[2762]: E0515 13:09:34.713163 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:09:34.713694 kubelet[2762]: I0515 13:09:34.713173 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:09:34.957365 kubelet[2762]: E0515 13:09:34.957244 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:34.958550 containerd[1543]: time="2025-05-15T13:09:34.958193936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:09:34.959362 containerd[1543]: time="2025-05-15T13:09:34.959338648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:09:35.172530 containerd[1543]: time="2025-05-15T13:09:35.172480127Z" level=error msg="Failed to destroy network for sandbox \"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.174636 containerd[1543]: time="2025-05-15T13:09:35.174519271Z" level=error msg="Failed to destroy network for sandbox \"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.178624 containerd[1543]: 
time="2025-05-15T13:09:35.176785646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.178714 kubelet[2762]: E0515 13:09:35.177216 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.178714 kubelet[2762]: E0515 13:09:35.177301 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:35.178714 kubelet[2762]: E0515 13:09:35.177330 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:35.178714 kubelet[2762]: 
E0515 13:09:35.177380 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78a1d994664d024e3106b434be51b722e5ce55808425a99571389fce2a1a80fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:09:35.180794 systemd[1]: run-netns-cni\x2de109e2ff\x2dd37f\x2d6ab1\x2d95eb\x2d4ff47d8c13fa.mount: Deactivated successfully. May 15 13:09:35.184709 containerd[1543]: time="2025-05-15T13:09:35.183453828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.186693 systemd[1]: run-netns-cni\x2d20cf5ef4\x2d7eb8\x2d65fb\x2d6c33\x2d6b165af115c4.mount: Deactivated successfully. 
May 15 13:09:35.187965 kubelet[2762]: E0515 13:09:35.184930 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 13:09:35.187965 kubelet[2762]: E0515 13:09:35.186752 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:35.187965 kubelet[2762]: E0515 13:09:35.186827 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:35.187965 kubelet[2762]: E0515 13:09:35.186901 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfdz2_kube-system(b53c6794-8ef1-4efd-9179-2e706d6227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0130418e5b84c2018ea5f1525782ace4bf70878f167db03bb0b2e31dddc010d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xfdz2" podUID="b53c6794-8ef1-4efd-9179-2e706d6227cb" May 15 13:09:36.953187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412377387.mount: Deactivated successfully. May 15 13:09:36.986567 containerd[1543]: time="2025-05-15T13:09:36.986488953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:36.987924 containerd[1543]: time="2025-05-15T13:09:36.987882466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 13:09:36.988621 containerd[1543]: time="2025-05-15T13:09:36.988386147Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:36.989938 containerd[1543]: time="2025-05-15T13:09:36.989875729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:36.990587 containerd[1543]: time="2025-05-15T13:09:36.990515691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.032511617s" May 15 13:09:36.990587 containerd[1543]: time="2025-05-15T13:09:36.990582371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 
13:09:37.011674 containerd[1543]: time="2025-05-15T13:09:37.011595462Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 13:09:37.024787 containerd[1543]: time="2025-05-15T13:09:37.024736847Z" level=info msg="Container d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5: CDI devices from CRI Config.CDIDevices: []" May 15 13:09:37.036294 containerd[1543]: time="2025-05-15T13:09:37.036251629Z" level=info msg="CreateContainer within sandbox \"1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\"" May 15 13:09:37.037607 containerd[1543]: time="2025-05-15T13:09:37.036954060Z" level=info msg="StartContainer for \"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\"" May 15 13:09:37.038612 containerd[1543]: time="2025-05-15T13:09:37.038547104Z" level=info msg="connecting to shim d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5" address="unix:///run/containerd/s/a77803d17418b2d2db4702b8e9402f5186877e7ae232a68d53b97b391b0ad662" protocol=ttrpc version=3 May 15 13:09:37.091759 systemd[1]: Started cri-containerd-d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5.scope - libcontainer container d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5. May 15 13:09:37.163073 containerd[1543]: time="2025-05-15T13:09:37.162985312Z" level=info msg="StartContainer for \"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" returns successfully" May 15 13:09:37.254886 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 13:09:37.255076 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 15 13:09:37.255192 kubelet[2762]: E0515 13:09:37.253102 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:37.312598 kubelet[2762]: I0515 13:09:37.311405 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h5k9z" podStartSLOduration=1.370253106 podStartE2EDuration="1m54.311386018s" podCreationTimestamp="2025-05-15 13:07:43 +0000 UTC" firstStartedPulling="2025-05-15 13:07:44.050978052 +0000 UTC m=+14.223275936" lastFinishedPulling="2025-05-15 13:09:36.992110964 +0000 UTC m=+127.164408848" observedRunningTime="2025-05-15 13:09:37.288930885 +0000 UTC m=+127.461228769" watchObservedRunningTime="2025-05-15 13:09:37.311386018 +0000 UTC m=+127.483683902" May 15 13:09:37.711700 containerd[1543]: time="2025-05-15T13:09:37.711645948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"4401330c439acae19823b5144f9a2a9ab9245ca25bac42e3b55e7d4d1f0954f7\" pid:4695 exit_status:1 exited_at:{seconds:1747314577 nanos:710775367}" May 15 13:09:38.256465 kubelet[2762]: E0515 13:09:38.256414 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:38.318866 containerd[1543]: time="2025-05-15T13:09:38.318743475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"c18550856e6fd44ab94e7208ec4fe1221e8c4e06577d02554a31c20dbabdc14f\" pid:4742 exit_status:1 exited_at:{seconds:1747314578 nanos:318431665}" May 15 13:09:38.570177 systemd[1]: Started sshd@12-172.236.109.179:22-139.178.89.65:58232.service - OpenSSH per-connection server daemon (139.178.89.65:58232). 
May 15 13:09:39.100143 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 58232 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:39.104267 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:39.116758 systemd-logind[1516]: New session 12 of user core. May 15 13:09:39.124315 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 13:09:39.729239 sshd[4843]: Connection closed by 139.178.89.65 port 58232 May 15 13:09:39.730145 sshd-session[4755]: pam_unix(sshd:session): session closed for user core May 15 13:09:39.736262 systemd[1]: sshd@12-172.236.109.179:22-139.178.89.65:58232.service: Deactivated successfully. May 15 13:09:39.739687 systemd[1]: session-12.scope: Deactivated successfully. May 15 13:09:39.739895 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. May 15 13:09:39.746123 systemd-logind[1516]: Removed session 12. May 15 13:09:39.797413 systemd[1]: Started sshd@13-172.236.109.179:22-139.178.89.65:58246.service - OpenSSH per-connection server daemon (139.178.89.65:58246). May 15 13:09:40.005396 systemd-networkd[1468]: vxlan.calico: Link UP May 15 13:09:40.005430 systemd-networkd[1468]: vxlan.calico: Gained carrier May 15 13:09:40.029838 containerd[1543]: time="2025-05-15T13:09:40.029729001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,}" May 15 13:09:40.161713 sshd[4896]: Accepted publickey for core from 139.178.89.65 port 58246 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:40.162238 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:40.174421 systemd-logind[1516]: New session 13 of user core. May 15 13:09:40.178692 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 15 13:09:40.293661 systemd-networkd[1468]: cali1c5c08f7cac: Link UP May 15 13:09:40.295350 systemd-networkd[1468]: cali1c5c08f7cac: Gained carrier May 15 13:09:40.318524 containerd[1543]: 2025-05-15 13:09:40.154 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0 calico-kube-controllers-6f97f99f64- calico-system 627c03e7-e267-48fe-b4ed-2069e33dcd5c 754 0 2025-05-15 13:07:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f97f99f64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-109-179 calico-kube-controllers-6f97f99f64-zpxjv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c5c08f7cac [] []}} ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-" May 15 13:09:40.318524 containerd[1543]: 2025-05-15 13:09:40.155 [INFO][4925] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 15 13:09:40.318524 containerd[1543]: 2025-05-15 13:09:40.226 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" HandleID="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Workload="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" 
May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.237 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" HandleID="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Workload="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039bb40), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-179", "pod":"calico-kube-controllers-6f97f99f64-zpxjv", "timestamp":"2025-05-15 13:09:40.226203099 +0000 UTC"}, Hostname:"172-236-109-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.244 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.244 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.244 [INFO][4947] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-179' May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.247 [INFO][4947] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" host="172-236-109-179" May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.254 [INFO][4947] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-109-179" May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.263 [INFO][4947] ipam/ipam.go 489: Trying affinity for 192.168.44.192/26 host="172-236-109-179" May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.266 [INFO][4947] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:40.319247 containerd[1543]: 2025-05-15 13:09:40.269 [INFO][4947] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.269 [INFO][4947] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.192/26 handle="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" host="172-236-109-179" May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.271 [INFO][4947] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852 May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.276 [INFO][4947] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.192/26 handle="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" host="172-236-109-179" May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.283 [INFO][4947] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.193/26] block=192.168.44.192/26 
handle="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" host="172-236-109-179" May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.283 [INFO][4947] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.193/26] handle="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" host="172-236-109-179" May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.283 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:40.319485 containerd[1543]: 2025-05-15 13:09:40.283 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.193/26] IPv6=[] ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" HandleID="k8s-pod-network.54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Workload="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 15 13:09:40.321695 containerd[1543]: 2025-05-15 13:09:40.288 [INFO][4925] cni-plugin/k8s.go 386: Populated endpoint ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0", GenerateName:"calico-kube-controllers-6f97f99f64-", Namespace:"calico-system", SelfLink:"", UID:"627c03e7-e267-48fe-b4ed-2069e33dcd5c", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f97f99f64", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"", Pod:"calico-kube-controllers-6f97f99f64-zpxjv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c5c08f7cac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:40.321789 containerd[1543]: 2025-05-15 13:09:40.288 [INFO][4925] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.193/32] ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 15 13:09:40.321789 containerd[1543]: 2025-05-15 13:09:40.288 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c5c08f7cac ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 15 13:09:40.321789 containerd[1543]: 2025-05-15 13:09:40.290 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 
15 13:09:40.321863 containerd[1543]: 2025-05-15 13:09:40.291 [INFO][4925] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0", GenerateName:"calico-kube-controllers-6f97f99f64-", Namespace:"calico-system", SelfLink:"", UID:"627c03e7-e267-48fe-b4ed-2069e33dcd5c", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f97f99f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852", Pod:"calico-kube-controllers-6f97f99f64-zpxjv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c5c08f7cac", MAC:"3e:9b:b4:c9:1c:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 
13:09:40.321943 containerd[1543]: 2025-05-15 13:09:40.306 [INFO][4925] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" Namespace="calico-system" Pod="calico-kube-controllers-6f97f99f64-zpxjv" WorkloadEndpoint="172--236--109--179-k8s-calico--kube--controllers--6f97f99f64--zpxjv-eth0" May 15 13:09:40.380808 containerd[1543]: time="2025-05-15T13:09:40.380761207Z" level=info msg="connecting to shim 54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852" address="unix:///run/containerd/s/e226ae7a110e36028180bb5178642a32ea0d08e53c7bd5acc60973e9923e12b0" namespace=k8s.io protocol=ttrpc version=3 May 15 13:09:40.433746 systemd[1]: Started cri-containerd-54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852.scope - libcontainer container 54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852. May 15 13:09:40.618664 containerd[1543]: time="2025-05-15T13:09:40.618181084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f97f99f64-zpxjv,Uid:627c03e7-e267-48fe-b4ed-2069e33dcd5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"54c409846d913f97a83073e073cd5afc4841e4c8a3b7a62bbd2e2c7c7aa58852\"" May 15 13:09:40.623621 containerd[1543]: time="2025-05-15T13:09:40.623588105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 13:09:40.641448 sshd[4951]: Connection closed by 139.178.89.65 port 58246 May 15 13:09:40.640734 sshd-session[4896]: pam_unix(sshd:session): session closed for user core May 15 13:09:40.645676 systemd[1]: sshd@13-172.236.109.179:22-139.178.89.65:58246.service: Deactivated successfully. May 15 13:09:40.648663 systemd[1]: session-13.scope: Deactivated successfully. May 15 13:09:40.650973 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. May 15 13:09:40.653133 systemd-logind[1516]: Removed session 13. 
May 15 13:09:40.705662 systemd[1]: Started sshd@14-172.236.109.179:22-139.178.89.65:58258.service - OpenSSH per-connection server daemon (139.178.89.65:58258). May 15 13:09:41.062174 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 58258 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:41.064070 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:41.070047 systemd-logind[1516]: New session 14 of user core. May 15 13:09:41.074904 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 13:09:41.089942 systemd-networkd[1468]: vxlan.calico: Gained IPv6LL May 15 13:09:41.384164 systemd-networkd[1468]: cali1c5c08f7cac: Gained IPv6LL May 15 13:09:41.569729 sshd[5056]: Connection closed by 139.178.89.65 port 58258 May 15 13:09:41.569994 sshd-session[5053]: pam_unix(sshd:session): session closed for user core May 15 13:09:41.575211 systemd[1]: sshd@14-172.236.109.179:22-139.178.89.65:58258.service: Deactivated successfully. May 15 13:09:41.578885 systemd[1]: session-14.scope: Deactivated successfully. May 15 13:09:41.580517 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. May 15 13:09:41.582802 systemd-logind[1516]: Removed session 14. 
May 15 13:09:41.959544 kubelet[2762]: E0515 13:09:41.959465 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:41.963005 containerd[1543]: time="2025-05-15T13:09:41.962498957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:09:42.309173 systemd-networkd[1468]: cali563b25ebb68: Link UP May 15 13:09:42.311469 systemd-networkd[1468]: cali563b25ebb68: Gained carrier May 15 13:09:42.349390 containerd[1543]: 2025-05-15 13:09:42.060 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0 coredns-6f6b679f8f- kube-system 4bce6dbe-21aa-444f-ac75-71dc3b47fb22 756 0 2025-05-15 13:07:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-109-179 coredns-6f6b679f8f-ftdbf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali563b25ebb68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-" May 15 13:09:42.349390 containerd[1543]: 2025-05-15 13:09:42.060 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.349390 containerd[1543]: 2025-05-15 13:09:42.200 [INFO][5087] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.223 [INFO][5087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bec0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-109-179", "pod":"coredns-6f6b679f8f-ftdbf", "timestamp":"2025-05-15 13:09:42.200186835 +0000 UTC"}, Hostname:"172-236-109-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.224 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.224 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.224 [INFO][5087] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-179' May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.229 [INFO][5087] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" host="172-236-109-179" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.249 [INFO][5087] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-109-179" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.263 [INFO][5087] ipam/ipam.go 489: Trying affinity for 192.168.44.192/26 host="172-236-109-179" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.269 [INFO][5087] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.272 [INFO][5087] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:42.349769 containerd[1543]: 2025-05-15 13:09:42.272 [INFO][5087] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.192/26 handle="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" host="172-236-109-179" May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.274 [INFO][5087] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815 May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.278 [INFO][5087] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.192/26 handle="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" host="172-236-109-179" May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.285 [INFO][5087] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.194/26] block=192.168.44.192/26 
handle="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" host="172-236-109-179" May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.286 [INFO][5087] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.194/26] handle="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" host="172-236-109-179" May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.286 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:42.350023 containerd[1543]: 2025-05-15 13:09:42.286 [INFO][5087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.194/26] IPv6=[] ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.350172 containerd[1543]: 2025-05-15 13:09:42.289 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bce6dbe-21aa-444f-ac75-71dc3b47fb22", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"", Pod:"coredns-6f6b679f8f-ftdbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali563b25ebb68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:42.350238 containerd[1543]: 2025-05-15 13:09:42.289 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.194/32] ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.350238 containerd[1543]: 2025-05-15 13:09:42.289 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali563b25ebb68 ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.350238 containerd[1543]: 2025-05-15 13:09:42.310 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" 
WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.350314 containerd[1543]: 2025-05-15 13:09:42.311 [INFO][5076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bce6dbe-21aa-444f-ac75-71dc3b47fb22", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815", Pod:"coredns-6f6b679f8f-ftdbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali563b25ebb68", MAC:"d2:fe:79:22:3b:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:42.350314 containerd[1543]: 2025-05-15 13:09:42.333 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.406994 containerd[1543]: time="2025-05-15T13:09:42.406869015Z" level=info msg="connecting to shim 1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" address="unix:///run/containerd/s/8286474a8b76d9af5f452a39e2707a3b072dd3ae32772e68b2c216cc1c4def59" namespace=k8s.io protocol=ttrpc version=3 May 15 13:09:42.419462 containerd[1543]: time="2025-05-15T13:09:42.419382379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/71/fs/usr/bin/kube-controllers: no space left on device" May 15 13:09:42.419654 containerd[1543]: time="2025-05-15T13:09:42.419409979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 13:09:42.421398 kubelet[2762]: E0515 13:09:42.421338 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer 
sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/71/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 13:09:42.421531 kubelet[2762]: E0515 13:09:42.421429 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/71/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 13:09:42.421612 kernel: overlayfs: failed to create directory /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/72/work/work (errno: 28); mounting read-only May 15 13:09:42.422439 kubelet[2762]: E0515 13:09:42.422351 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7p84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/71/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" May 15 13:09:42.425878 kernel: overlayfs: failed to set uuid (72/fs, err=-28); falling back to uuid=null. 
May 15 13:09:42.425951 kubelet[2762]: E0515 13:09:42.424696 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/71/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:09:42.457707 systemd[1]: Started cri-containerd-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815.scope - libcontainer container 1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815. May 15 13:09:42.467465 systemd[1]: cri-containerd-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815.scope: Deactivated successfully. May 15 13:09:42.468088 systemd[1]: Stopped cri-containerd-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815.scope - libcontainer container 1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815. May 15 13:09:42.476159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815-rootfs.mount: Deactivated successfully. 
May 15 13:09:42.478311 containerd[1543]: time="2025-05-15T13:09:42.478244632Z" level=info msg="shim disconnected" id=1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815 namespace=k8s.io May 15 13:09:42.478504 containerd[1543]: time="2025-05-15T13:09:42.478473283Z" level=warning msg="cleaning up after shim disconnected" id=1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815 namespace=k8s.io May 15 13:09:42.479103 containerd[1543]: time="2025-05-15T13:09:42.479022474Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 13:09:42.497074 containerd[1543]: time="2025-05-15T13:09:42.496964998Z" level=warning msg="cleanup warnings time=\"2025-05-15T13:09:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 13:09:42.498271 containerd[1543]: time="2025-05-15T13:09:42.498242000Z" level=error msg="copy shim log" error="read /proc/self/fd/103: file already closed" namespace=k8s.io May 15 13:09:42.501234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815-shm.mount: Deactivated successfully. May 15 13:09:42.569889 systemd-networkd[1468]: cali563b25ebb68: Link DOWN May 15 13:09:42.570468 systemd-networkd[1468]: cali563b25ebb68: Lost carrier May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.567 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.568 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" iface="eth0" netns="/var/run/netns/cni-53081563-bc04-b631-ed7c-6c4c1c956a17" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.569 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" iface="eth0" netns="/var/run/netns/cni-53081563-bc04-b631-ed7c-6c4c1c956a17" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.577 [INFO][5170] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" after=9.181909ms iface="eth0" netns="/var/run/netns/cni-53081563-bc04-b631-ed7c-6c4c1c956a17" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.577 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.578 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.637 [INFO][5180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.637 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.638 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.676 [INFO][5180] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.676 [INFO][5180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" HandleID="k8s-pod-network.1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.677 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:42.682108 containerd[1543]: 2025-05-15 13:09:42.679 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" May 15 13:09:42.683617 containerd[1543]: time="2025-05-15T13:09:42.683577748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to start sandbox \"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/rootfs/proc: read-only file system" May 15 13:09:42.684179 kubelet[2762]: E0515 13:09:42.684131 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox \"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/rootfs/proc: read-only file system" May 15 13:09:42.684280 kubelet[2762]: E0515 13:09:42.684219 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox \"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat 
/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/rootfs/proc: read-only file system" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:42.684280 kubelet[2762]: E0515 13:09:42.684264 2762 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox \"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/rootfs/proc: read-only file system" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:42.684686 kubelet[2762]: E0515 13:09:42.684335 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ftdbf_kube-system(4bce6dbe-21aa-444f-ac75-71dc3b47fb22)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815\\\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \\\"proc\\\" to rootfs at \\\"/proc\\\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815/rootfs/proc: read-only file system\"" pod="kube-system/coredns-6f6b679f8f-ftdbf" podUID="4bce6dbe-21aa-444f-ac75-71dc3b47fb22" May 15 13:09:42.970303 systemd[1]: run-netns-cni\x2d53081563\x2dbc04\x2db631\x2ded7c\x2d6c4c1c956a17.mount: Deactivated 
successfully. May 15 13:09:43.285546 kubelet[2762]: E0515 13:09:43.284876 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:43.286901 containerd[1543]: time="2025-05-15T13:09:43.286550862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,}" May 15 13:09:43.287754 kubelet[2762]: E0515 13:09:43.287659 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:09:43.415340 systemd-networkd[1468]: cali563b25ebb68: Link UP May 15 13:09:43.416057 systemd-networkd[1468]: cali563b25ebb68: Gained carrier May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.336 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0 coredns-6f6b679f8f- kube-system 4bce6dbe-21aa-444f-ac75-71dc3b47fb22 6529 0 2025-05-15 13:07:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-109-179 1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815 coredns-6f6b679f8f-ftdbf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali563b25ebb68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" 
WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.336 [INFO][5191] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.367 [INFO][5202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" HandleID="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.378 [INFO][5202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" HandleID="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290f30), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-109-179", "pod":"coredns-6f6b679f8f-ftdbf", "timestamp":"2025-05-15 13:09:43.367515219 +0000 UTC"}, Hostname:"172-236-109-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.378 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.378 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.378 [INFO][5202] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-179' May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.380 [INFO][5202] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.384 [INFO][5202] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.389 [INFO][5202] ipam/ipam.go 489: Trying affinity for 192.168.44.192/26 host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.391 [INFO][5202] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.394 [INFO][5202] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.394 [INFO][5202] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.192/26 handle="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.395 [INFO][5202] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1 May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.399 [INFO][5202] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.192/26 handle="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.406 [INFO][5202] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.195/26] block=192.168.44.192/26 
handle="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.406 [INFO][5202] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.195/26] handle="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" host="172-236-109-179" May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.406 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:43.432252 containerd[1543]: 2025-05-15 13:09:43.406 [INFO][5202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.195/26] IPv6=[] ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" HandleID="k8s-pod-network.1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.409 [INFO][5191] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bce6dbe-21aa-444f-ac75-71dc3b47fb22", ResourceVersion:"6529", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815", Pod:"coredns-6f6b679f8f-ftdbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali563b25ebb68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.409 [INFO][5191] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.195/32] ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.410 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali563b25ebb68 ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.413 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.413 [INFO][5191] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4bce6dbe-21aa-444f-ac75-71dc3b47fb22", ResourceVersion:"6529", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1", Pod:"coredns-6f6b679f8f-ftdbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali563b25ebb68", MAC:"9a:ee:bc:7e:03:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:43.433315 containerd[1543]: 2025-05-15 13:09:43.429 [INFO][5191] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-ftdbf" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--ftdbf-eth0" May 15 13:09:43.477605 containerd[1543]: time="2025-05-15T13:09:43.476652119Z" level=info msg="connecting to shim 1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1" address="unix:///run/containerd/s/8eeefb7520711d9c878620975d98000c7a2bf87def7815278fdb6785a87aad73" namespace=k8s.io protocol=ttrpc version=3 May 15 13:09:43.510173 systemd[1]: Started cri-containerd-1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1.scope - libcontainer container 1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1. 
May 15 13:09:43.581713 containerd[1543]: time="2025-05-15T13:09:43.581524202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ftdbf,Uid:4bce6dbe-21aa-444f-ac75-71dc3b47fb22,Namespace:kube-system,Attempt:0,} returns sandbox id \"1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1\"" May 15 13:09:43.584364 kubelet[2762]: E0515 13:09:43.584306 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:43.587330 containerd[1543]: time="2025-05-15T13:09:43.587068613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 13:09:44.204494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555847071.mount: Deactivated successfully. May 15 13:09:44.902611 containerd[1543]: time="2025-05-15T13:09:44.902233983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:44.903233 containerd[1543]: time="2025-05-15T13:09:44.903078295Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 13:09:44.904065 containerd[1543]: time="2025-05-15T13:09:44.904018097Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:44.906612 containerd[1543]: time="2025-05-15T13:09:44.906543451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:44.907819 containerd[1543]: time="2025-05-15T13:09:44.907390323Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.32001152s" May 15 13:09:44.907819 containerd[1543]: time="2025-05-15T13:09:44.907427263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 13:09:44.913412 containerd[1543]: time="2025-05-15T13:09:44.913381855Z" level=info msg="CreateContainer within sandbox \"1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 13:09:44.926415 containerd[1543]: time="2025-05-15T13:09:44.923973435Z" level=info msg="Container e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc: CDI devices from CRI Config.CDIDevices: []" May 15 13:09:44.926253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052900668.mount: Deactivated successfully. 
May 15 13:09:44.937059 containerd[1543]: time="2025-05-15T13:09:44.937019900Z" level=info msg="CreateContainer within sandbox \"1727f1ff18399879cc5b2bb9d003643d69b92f3cc36f9d328479c57ed365c1a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc\"" May 15 13:09:44.938726 containerd[1543]: time="2025-05-15T13:09:44.938699913Z" level=info msg="StartContainer for \"e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc\"" May 15 13:09:44.940076 containerd[1543]: time="2025-05-15T13:09:44.940034506Z" level=info msg="connecting to shim e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc" address="unix:///run/containerd/s/8eeefb7520711d9c878620975d98000c7a2bf87def7815278fdb6785a87aad73" protocol=ttrpc version=3 May 15 13:09:44.967704 systemd[1]: Started cri-containerd-e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc.scope - libcontainer container e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc. 
May 15 13:09:45.002359 containerd[1543]: time="2025-05-15T13:09:45.002326307Z" level=info msg="StartContainer for \"e2ef86c3191eecf0f0ef91ae7ff23aea9c01f62da7673e33cda07f3fb304eccc\" returns successfully" May 15 13:09:45.293132 kubelet[2762]: E0515 13:09:45.293005 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:45.324854 kubelet[2762]: I0515 13:09:45.324722 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ftdbf" podStartSLOduration=127.002072116 podStartE2EDuration="2m8.32450979s" podCreationTimestamp="2025-05-15 13:07:37 +0000 UTC" firstStartedPulling="2025-05-15 13:09:43.586213571 +0000 UTC m=+133.758511465" lastFinishedPulling="2025-05-15 13:09:44.908651255 +0000 UTC m=+135.080949139" observedRunningTime="2025-05-15 13:09:45.307760477 +0000 UTC m=+135.480058361" watchObservedRunningTime="2025-05-15 13:09:45.32450979 +0000 UTC m=+135.496807674" May 15 13:09:45.439895 systemd-networkd[1468]: cali563b25ebb68: Gained IPv6LL May 15 13:09:45.587143 kubelet[2762]: W0515 13:09:45.586880 2762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bce6dbe_21aa_444f_ac75_71dc3b47fb22.slice/cri-containerd-1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815.scope WatchSource:0}: container "1cd7e269e560263324389d5a61742a3cb8d11f6995d0b2c42eeda1c867139815" in namespace "k8s.io": not found May 15 13:09:45.610219 kubelet[2762]: I0515 13:09:45.610178 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:09:45.610338 kubelet[2762]: I0515 13:09:45.610248 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:09:45.612301 kubelet[2762]: I0515 13:09:45.612281 2762 image_gc_manager.go:431] "Attempting to 
delete unused images" May 15 13:09:45.626520 kubelet[2762]: I0515 13:09:45.626480 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:09:45.626680 kubelet[2762]: I0515 13:09:45.626654 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/csi-node-driver-fxxht","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626703 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626713 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626720 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626739 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626748 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626757 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626765 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626773 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626781 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:09:45.626825 kubelet[2762]: E0515 13:09:45.626789 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:09:45.626825 kubelet[2762]: I0515 13:09:45.626798 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:09:46.295083 kubelet[2762]: E0515 13:09:46.295048 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:46.634375 systemd[1]: Started sshd@15-172.236.109.179:22-139.178.89.65:48574.service - OpenSSH per-connection server daemon (139.178.89.65:48574). May 15 13:09:46.979742 sshd[5356]: Accepted publickey for core from 139.178.89.65 port 48574 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:46.981629 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:46.988106 systemd-logind[1516]: New session 15 of user core. May 15 13:09:46.992716 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 13:09:47.281750 sshd[5358]: Connection closed by 139.178.89.65 port 48574 May 15 13:09:47.282820 sshd-session[5356]: pam_unix(sshd:session): session closed for user core May 15 13:09:47.287107 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. May 15 13:09:47.287833 systemd[1]: sshd@15-172.236.109.179:22-139.178.89.65:48574.service: Deactivated successfully. 
May 15 13:09:47.290931 systemd[1]: session-15.scope: Deactivated successfully. May 15 13:09:47.292479 systemd-logind[1516]: Removed session 15. May 15 13:09:47.296384 kubelet[2762]: E0515 13:09:47.296359 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:49.778852 containerd[1543]: time="2025-05-15T13:09:49.778790609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"6481c4790f52fd5fee478e218d4669fa7677f2e5e7465906d63b439427ca24f1\" pid:5384 exited_at:{seconds:1747314589 nanos:778392348}" May 15 13:09:49.784172 kubelet[2762]: E0515 13:09:49.784141 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:49.963211 kubelet[2762]: E0515 13:09:49.963131 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:49.964102 containerd[1543]: time="2025-05-15T13:09:49.964071988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,}" May 15 13:09:50.087288 systemd-networkd[1468]: calic3d79fb24c9: Link UP May 15 13:09:50.089422 systemd-networkd[1468]: calic3d79fb24c9: Gained carrier May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.007 [INFO][5396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0 coredns-6f6b679f8f- kube-system b53c6794-8ef1-4efd-9179-2e706d6227cb 749 0 2025-05-15 13:07:37 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-109-179 coredns-6f6b679f8f-xfdz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3d79fb24c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.007 [INFO][5396] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.037 [INFO][5407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" HandleID="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.047 [INFO][5407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" HandleID="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031cc70), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-109-179", "pod":"coredns-6f6b679f8f-xfdz2", "timestamp":"2025-05-15 13:09:50.0373055 +0000 UTC"}, Hostname:"172-236-109-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.047 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.047 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.047 [INFO][5407] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-179' May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.050 [INFO][5407] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.054 [INFO][5407] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.060 [INFO][5407] ipam/ipam.go 489: Trying affinity for 192.168.44.192/26 host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.063 [INFO][5407] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.066 [INFO][5407] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.066 [INFO][5407] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.192/26 handle="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.068 [INFO][5407] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55 May 15 13:09:50.128605 containerd[1543]: 
2025-05-15 13:09:50.071 [INFO][5407] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.192/26 handle="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.077 [INFO][5407] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.196/26] block=192.168.44.192/26 handle="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.077 [INFO][5407] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.196/26] handle="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" host="172-236-109-179" May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.078 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:50.128605 containerd[1543]: 2025-05-15 13:09:50.078 [INFO][5407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.196/26] IPv6=[] ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" HandleID="k8s-pod-network.6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Workload="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.080 [INFO][5396] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b53c6794-8ef1-4efd-9179-2e706d6227cb", ResourceVersion:"749", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"", Pod:"coredns-6f6b679f8f-xfdz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d79fb24c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.081 [INFO][5396] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.196/32] ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.081 [INFO][5396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3d79fb24c9 ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.090 [INFO][5396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.091 [INFO][5396] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b53c6794-8ef1-4efd-9179-2e706d6227cb", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55", Pod:"coredns-6f6b679f8f-xfdz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d79fb24c9", MAC:"9a:8d:55:22:7a:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:50.129408 containerd[1543]: 2025-05-15 13:09:50.106 [INFO][5396] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" Namespace="kube-system" Pod="coredns-6f6b679f8f-xfdz2" WorkloadEndpoint="172--236--109--179-k8s-coredns--6f6b679f8f--xfdz2-eth0" May 15 13:09:50.168516 containerd[1543]: time="2025-05-15T13:09:50.168434304Z" level=info msg="connecting to shim 6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55" address="unix:///run/containerd/s/b2b439a082e410c85f3b2cd915eeec93411a658c9de5eceaa500acea610a7b43" namespace=k8s.io protocol=ttrpc version=3 May 15 13:09:50.199701 systemd[1]: Started cri-containerd-6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55.scope - libcontainer container 6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55. 
May 15 13:09:50.253423 containerd[1543]: time="2025-05-15T13:09:50.253360728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfdz2,Uid:b53c6794-8ef1-4efd-9179-2e706d6227cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55\"" May 15 13:09:50.254609 kubelet[2762]: E0515 13:09:50.254588 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:50.258386 containerd[1543]: time="2025-05-15T13:09:50.258315318Z" level=info msg="CreateContainer within sandbox \"6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 13:09:50.267622 containerd[1543]: time="2025-05-15T13:09:50.267593047Z" level=info msg="Container 16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1: CDI devices from CRI Config.CDIDevices: []" May 15 13:09:50.276717 containerd[1543]: time="2025-05-15T13:09:50.275459711Z" level=info msg="CreateContainer within sandbox \"6dd6eb95d99753fc09bf603a137f5d1dcd6528eb0b05d715bfca01139e717e55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1\"" May 15 13:09:50.278331 containerd[1543]: time="2025-05-15T13:09:50.277790246Z" level=info msg="StartContainer for \"16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1\"" May 15 13:09:50.278021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416278005.mount: Deactivated successfully. 
May 15 13:09:50.281204 containerd[1543]: time="2025-05-15T13:09:50.281030103Z" level=info msg="connecting to shim 16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1" address="unix:///run/containerd/s/b2b439a082e410c85f3b2cd915eeec93411a658c9de5eceaa500acea610a7b43" protocol=ttrpc version=3 May 15 13:09:50.302945 systemd[1]: Started cri-containerd-16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1.scope - libcontainer container 16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1. May 15 13:09:50.340717 containerd[1543]: time="2025-05-15T13:09:50.340534837Z" level=info msg="StartContainer for \"16537c88bf082f1f3e4f301ac26fcbf28e6bd0fa75971439a842f2de4f0f61d1\" returns successfully" May 15 13:09:50.957459 kubelet[2762]: E0515 13:09:50.957404 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:50.958440 containerd[1543]: time="2025-05-15T13:09:50.958099074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,}" May 15 13:09:51.068814 systemd-networkd[1468]: calib1e317e8c1c: Link UP May 15 13:09:51.069409 systemd-networkd[1468]: calib1e317e8c1c: Gained carrier May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:50.997 [INFO][5512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--179-k8s-csi--node--driver--fxxht-eth0 csi-node-driver- calico-system 85ebef63-264f-4ef9-b5f5-d3d0ecc23527 647 0 2025-05-15 13:07:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] 
[]} {k8s 172-236-109-179 csi-node-driver-fxxht eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib1e317e8c1c [] []}} ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:50.997 [INFO][5512] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.026 [INFO][5525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" HandleID="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Workload="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.034 [INFO][5525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" HandleID="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Workload="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267930), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-179", "pod":"csi-node-driver-fxxht", "timestamp":"2025-05-15 13:09:51.026243097 +0000 UTC"}, Hostname:"172-236-109-179", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.035 [INFO][5525] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.036 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.036 [INFO][5525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-179' May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.038 [INFO][5525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.042 [INFO][5525] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.046 [INFO][5525] ipam/ipam.go 489: Trying affinity for 192.168.44.192/26 host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.048 [INFO][5525] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.050 [INFO][5525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.192/26 host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.050 [INFO][5525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.192/26 handle="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.051 [INFO][5525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.055 [INFO][5525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.192/26 handle="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" 
host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.061 [INFO][5525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.197/26] block=192.168.44.192/26 handle="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.061 [INFO][5525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.197/26] handle="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" host="172-236-109-179" May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.061 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 13:09:51.085992 containerd[1543]: 2025-05-15 13:09:51.061 [INFO][5525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.197/26] IPv6=[] ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" HandleID="k8s-pod-network.63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Workload="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.088756 containerd[1543]: 2025-05-15 13:09:51.064 [INFO][5512] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-csi--node--driver--fxxht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85ebef63-264f-4ef9-b5f5-d3d0ecc23527", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"", Pod:"csi-node-driver-fxxht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1e317e8c1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:51.088756 containerd[1543]: 2025-05-15 13:09:51.064 [INFO][5512] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.197/32] ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.088756 containerd[1543]: 2025-05-15 13:09:51.064 [INFO][5512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1e317e8c1c ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.088756 containerd[1543]: 2025-05-15 13:09:51.067 [INFO][5512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.088756 containerd[1543]: 
2025-05-15 13:09:51.067 [INFO][5512] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--179-k8s-csi--node--driver--fxxht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85ebef63-264f-4ef9-b5f5-d3d0ecc23527", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 13, 7, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-179", ContainerID:"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a", Pod:"csi-node-driver-fxxht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1e317e8c1c", MAC:"02:ac:92:ad:1c:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 13:09:51.088756 containerd[1543]: 2025-05-15 13:09:51.080 [INFO][5512] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" Namespace="calico-system" Pod="csi-node-driver-fxxht" WorkloadEndpoint="172--236--109--179-k8s-csi--node--driver--fxxht-eth0" May 15 13:09:51.138631 containerd[1543]: time="2025-05-15T13:09:51.138572754Z" level=info msg="connecting to shim 63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a" address="unix:///run/containerd/s/9de5ef273515d2e262b4c7281ab00e5d766444ad79adc39df72b705e04a37fd7" namespace=k8s.io protocol=ttrpc version=3 May 15 13:09:51.179740 systemd[1]: Started cri-containerd-63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a.scope - libcontainer container 63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a. May 15 13:09:51.219787 containerd[1543]: time="2025-05-15T13:09:51.219672601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxxht,Uid:85ebef63-264f-4ef9-b5f5-d3d0ecc23527,Namespace:calico-system,Attempt:0,} returns sandbox id \"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a\"" May 15 13:09:51.222283 containerd[1543]: time="2025-05-15T13:09:51.222226356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 13:09:51.311822 kubelet[2762]: E0515 13:09:51.311399 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:51.344945 kubelet[2762]: I0515 13:09:51.344872 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xfdz2" podStartSLOduration=134.344852564 podStartE2EDuration="2m14.344852564s" podCreationTimestamp="2025-05-15 13:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 13:09:51.329938575 +0000 UTC m=+141.502236459" 
watchObservedRunningTime="2025-05-15 13:09:51.344852564 +0000 UTC m=+141.517150448" May 15 13:09:51.712379 systemd-networkd[1468]: calic3d79fb24c9: Gained IPv6LL May 15 13:09:52.190462 containerd[1543]: time="2025-05-15T13:09:52.188933480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:52.190462 containerd[1543]: time="2025-05-15T13:09:52.189786022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 13:09:52.191861 containerd[1543]: time="2025-05-15T13:09:52.191678656Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:52.194673 containerd[1543]: time="2025-05-15T13:09:52.194647422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 13:09:52.196151 containerd[1543]: time="2025-05-15T13:09:52.195781534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 973.524057ms" May 15 13:09:52.196151 containerd[1543]: time="2025-05-15T13:09:52.195924164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 13:09:52.199798 containerd[1543]: time="2025-05-15T13:09:52.199776512Z" level=info msg="CreateContainer within sandbox \"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 13:09:52.209950 containerd[1543]: time="2025-05-15T13:09:52.209927321Z" level=info msg="Container e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc: CDI devices from CRI Config.CDIDevices: []" May 15 13:09:52.218620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602153562.mount: Deactivated successfully. May 15 13:09:52.228197 containerd[1543]: time="2025-05-15T13:09:52.228143817Z" level=info msg="CreateContainer within sandbox \"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc\"" May 15 13:09:52.229464 containerd[1543]: time="2025-05-15T13:09:52.229217929Z" level=info msg="StartContainer for \"e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc\"" May 15 13:09:52.232640 containerd[1543]: time="2025-05-15T13:09:52.232610986Z" level=info msg="connecting to shim e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc" address="unix:///run/containerd/s/9de5ef273515d2e262b4c7281ab00e5d766444ad79adc39df72b705e04a37fd7" protocol=ttrpc version=3 May 15 13:09:52.274783 systemd[1]: Started cri-containerd-e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc.scope - libcontainer container e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc. May 15 13:09:52.325167 kubelet[2762]: E0515 13:09:52.325084 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:09:52.346888 systemd[1]: Started sshd@16-172.236.109.179:22-139.178.89.65:48588.service - OpenSSH per-connection server daemon (139.178.89.65:48588). 
May 15 13:09:52.348073 containerd[1543]: time="2025-05-15T13:09:52.348005880Z" level=info msg="StartContainer for \"e5ed1de201f3357823ebe5a94dc0e3d9babc6dc859ae6bd1e95061b7b53c1edc\" returns successfully" May 15 13:09:52.350850 containerd[1543]: time="2025-05-15T13:09:52.350797845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 13:09:52.351809 systemd-networkd[1468]: calib1e317e8c1c: Gained IPv6LL May 15 13:09:52.696157 sshd[5628]: Accepted publickey for core from 139.178.89.65 port 48588 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:09:52.699117 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:09:52.705990 systemd-logind[1516]: New session 16 of user core. May 15 13:09:52.710731 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 13:09:53.015801 containerd[1543]: time="2025-05-15T13:09:53.015479464Z" level=error msg="failed to cleanup \"extract-910292710-tCvl sha256:f82f7f00015fd872301fdeeaafee8d248cfe36d482f3b3270b6cdec53de6ae3d\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 13:09:53.016355 containerd[1543]: time="2025-05-15T13:09:53.016318076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/25f3d4f972fca210802eb6fd5b932e3c6038e46cc38db95d3ecf34db45b18588/data: no space left on device" May 15 13:09:53.016411 containerd[1543]: time="2025-05-15T13:09:53.016399726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=8392913" May 15 13:09:53.016688 kubelet[2762]: E0515 13:09:53.016649 2762 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/25f3d4f972fca210802eb6fd5b932e3c6038e46cc38db95d3ecf34db45b18588/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3" May 15 13:09:53.016842 kubelet[2762]: E0515 13:09:53.016697 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/25f3d4f972fca210802eb6fd5b932e3c6038e46cc38db95d3ecf34db45b18588/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3" May 15 13:09:53.019172 kubelet[2762]: E0515 13:09:53.019116 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fxxht_calico-system(85ebef63-264f-4ef9-b5f5-d3d0ecc23527): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\": failed to copy: write 
/var/lib/containerd/io.containerd.content.v1.content/ingest/25f3d4f972fca210802eb6fd5b932e3c6038e46cc38db95d3ecf34db45b18588/data: no space left on device" logger="UnhandledError" May 15 13:09:53.021235 kubelet[2762]: E0515 13:09:53.021191 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/25f3d4f972fca210802eb6fd5b932e3c6038e46cc38db95d3ecf34db45b18588/data: no space left on device\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:09:53.025414 sshd[5630]: Connection closed by 139.178.89.65 port 48588 May 15 13:09:53.026040 sshd-session[5628]: pam_unix(sshd:session): session closed for user core May 15 13:09:53.031607 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. May 15 13:09:53.031926 systemd[1]: sshd@16-172.236.109.179:22-139.178.89.65:48588.service: Deactivated successfully. May 15 13:09:53.034328 systemd[1]: session-16.scope: Deactivated successfully. May 15 13:09:53.038184 systemd-logind[1516]: Removed session 16. 
May 15 13:09:53.329707 kubelet[2762]: E0515 13:09:53.329583 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\\\"\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:09:54.333023 kubelet[2762]: E0515 13:09:54.332375 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\\\"\"" pod="calico-system/csi-node-driver-fxxht" podUID="85ebef63-264f-4ef9-b5f5-d3d0ecc23527" May 15 13:09:54.959037 containerd[1543]: time="2025-05-15T13:09:54.958897306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 13:09:55.653689 kubelet[2762]: I0515 13:09:55.653645 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:09:55.653689 kubelet[2762]: I0515 13:09:55.653691 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:09:55.656532 kubelet[2762]: I0515 13:09:55.656464 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:09:55.677325 kubelet[2762]: I0515 13:09:55.677298 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:09:55.677691 kubelet[2762]: I0515 13:09:55.677646 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-scheduler-172-236-109-179"] May 15 13:09:55.677691 kubelet[2762]: E0515 13:09:55.677700 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:09:55.677691 kubelet[2762]: E0515 13:09:55.677721 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677732 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677742 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677751 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677760 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677768 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677778 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677787 2762 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:09:55.677938 kubelet[2762]: E0515 13:09:55.677799 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:09:55.677938 kubelet[2762]: I0515 13:09:55.677808 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:09:55.823028 containerd[1543]: time="2025-05-15T13:09:55.822972553Z" level=error msg="failed to cleanup \"extract-721216165-s0Ca sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
May 15 13:09:55.823790 containerd[1543]: time="2025-05-15T13:09:55.823659604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device"
May 15 13:09:55.823790 containerd[1543]: time="2025-05-15T13:09:55.823707884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=8392886"
May 15 13:09:55.824004 kubelet[2762]: E0515 13:09:55.823967 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:09:55.824115 kubelet[2762]: E0515 13:09:55.824019 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:09:55.824205 kubelet[2762]: E0515 13:09:55.824154 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7p84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device" logger="UnhandledError"
May 15 13:09:55.825643 kubelet[2762]: E0515 13:09:55.825607 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/a30191e6e725ce56612d980f181f7fd27583251c626f660ebf791cfe138f2043/data: no space left on device\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:09:57.957585 kubelet[2762]: E0515 13:09:57.957109 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:09:58.091127 systemd[1]: Started sshd@17-172.236.109.179:22-139.178.89.65:55260.service - OpenSSH per-connection server daemon (139.178.89.65:55260).
May 15 13:09:58.449621 sshd[5649]: Accepted publickey for core from 139.178.89.65 port 55260 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:09:58.452279 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:09:58.460393 systemd-logind[1516]: New session 17 of user core.
May 15 13:09:58.466688 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 13:09:58.781709 sshd[5651]: Connection closed by 139.178.89.65 port 55260
May 15 13:09:58.782726 sshd-session[5649]: pam_unix(sshd:session): session closed for user core
May 15 13:09:58.787481 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit.
May 15 13:09:58.788038 systemd[1]: sshd@17-172.236.109.179:22-139.178.89.65:55260.service: Deactivated successfully.
May 15 13:09:58.791008 systemd[1]: session-17.scope: Deactivated successfully.
May 15 13:09:58.792975 systemd-logind[1516]: Removed session 17.
May 15 13:10:01.957391 kubelet[2762]: E0515 13:10:01.957281 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:10:03.846388 systemd[1]: Started sshd@18-172.236.109.179:22-139.178.89.65:55264.service - OpenSSH per-connection server daemon (139.178.89.65:55264).
May 15 13:10:04.196320 sshd[5675]: Accepted publickey for core from 139.178.89.65 port 55264 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:04.197879 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:04.202900 systemd-logind[1516]: New session 18 of user core.
May 15 13:10:04.211694 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 13:10:04.570682 sshd[5677]: Connection closed by 139.178.89.65 port 55264
May 15 13:10:04.572164 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
May 15 13:10:04.578466 systemd[1]: sshd@18-172.236.109.179:22-139.178.89.65:55264.service: Deactivated successfully.
May 15 13:10:04.582406 systemd[1]: session-18.scope: Deactivated successfully.
May 15 13:10:04.583638 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit.
May 15 13:10:04.585579 systemd-logind[1516]: Removed session 18.
May 15 13:10:05.707240 kubelet[2762]: I0515 13:10:05.707011 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:10:05.707240 kubelet[2762]: I0515 13:10:05.707252 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:10:05.711416 kubelet[2762]: I0515 13:10:05.711116 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:10:05.714539 kubelet[2762]: I0515 13:10:05.714495 2762 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578" size=21998657 runtimeHandler=""
May 15 13:10:05.715290 containerd[1543]: time="2025-05-15T13:10:05.715242376Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 15 13:10:05.716966 containerd[1543]: time="2025-05-15T13:10:05.716893659Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.36.7\""
May 15 13:10:05.717903 containerd[1543]: time="2025-05-15T13:10:05.717843810Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\""
May 15 13:10:05.718307 containerd[1543]: time="2025-05-15T13:10:05.718208731Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" returns successfully"
May 15 13:10:05.718307 containerd[1543]: time="2025-05-15T13:10:05.718262131Z" level=info msg="ImageDelete event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 15 13:10:05.735627 kubelet[2762]: I0515 13:10:05.735544 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:10:05.735886 kubelet[2762]: I0515 13:10:05.735782 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:10:05.735886 kubelet[2762]: E0515 13:10:05.735867 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:10:05.735886 kubelet[2762]: E0515 13:10:05.735881 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735920 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735932 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735947 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735955 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735964 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.735972 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.736011 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:10:05.736113 kubelet[2762]: E0515 13:10:05.736019 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:10:05.736113 kubelet[2762]: I0515 13:10:05.736028 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:10:06.958046 containerd[1543]: time="2025-05-15T13:10:06.957999668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 15 13:10:08.350045 containerd[1543]: time="2025-05-15T13:10:08.349959671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:10:08.351018 containerd[1543]: time="2025-05-15T13:10:08.350902443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 15 13:10:08.351818 containerd[1543]: time="2025-05-15T13:10:08.351785395Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:10:08.353809 containerd[1543]: time="2025-05-15T13:10:08.353761369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 13:10:08.354694 containerd[1543]: time="2025-05-15T13:10:08.354659351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.396611003s"
May 15 13:10:08.354793 containerd[1543]: time="2025-05-15T13:10:08.354774711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 15 13:10:08.357111 containerd[1543]: time="2025-05-15T13:10:08.357091305Z" level=info msg="CreateContainer within sandbox \"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 15 13:10:08.366843 containerd[1543]: time="2025-05-15T13:10:08.366786574Z" level=info msg="Container 7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee: CDI devices from CRI Config.CDIDevices: []"
May 15 13:10:08.372964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245842365.mount: Deactivated successfully.
May 15 13:10:08.393221 containerd[1543]: time="2025-05-15T13:10:08.392540374Z" level=info msg="CreateContainer within sandbox \"63e5d4467a16a083783522276d43bd4f598289ad8d28228ad050e540e3bce07a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee\""
May 15 13:10:08.393843 containerd[1543]: time="2025-05-15T13:10:08.393801617Z" level=info msg="StartContainer for \"7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee\""
May 15 13:10:08.395961 containerd[1543]: time="2025-05-15T13:10:08.395827341Z" level=info msg="connecting to shim 7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee" address="unix:///run/containerd/s/9de5ef273515d2e262b4c7281ab00e5d766444ad79adc39df72b705e04a37fd7" protocol=ttrpc version=3
May 15 13:10:08.463992 systemd[1]: Started cri-containerd-7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee.scope - libcontainer container 7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee.
May 15 13:10:08.528954 containerd[1543]: time="2025-05-15T13:10:08.528880520Z" level=info msg="StartContainer for \"7a255d4c4ffc58de9c7abce41c9e5f6b2488ae16ef01bbed847ec80dac596eee\" returns successfully"
May 15 13:10:08.958801 kubelet[2762]: E0515 13:10:08.958052 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:10:09.121040 kubelet[2762]: I0515 13:10:09.120989 2762 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 15 13:10:09.121040 kubelet[2762]: I0515 13:10:09.121052 2762 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 15 13:10:09.423417 kubelet[2762]: I0515 13:10:09.423343 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fxxht" podStartSLOduration=129.289267387 podStartE2EDuration="2m26.423325984s" podCreationTimestamp="2025-05-15 13:07:43 +0000 UTC" firstStartedPulling="2025-05-15 13:09:51.221537745 +0000 UTC m=+141.393835629" lastFinishedPulling="2025-05-15 13:10:08.355596342 +0000 UTC m=+158.527894226" observedRunningTime="2025-05-15 13:10:09.399218587 +0000 UTC m=+159.571516471" watchObservedRunningTime="2025-05-15 13:10:09.423325984 +0000 UTC m=+159.595623868"
May 15 13:10:09.636914 systemd[1]: Started sshd@19-172.236.109.179:22-139.178.89.65:53488.service - OpenSSH per-connection server daemon (139.178.89.65:53488).
May 15 13:10:10.003959 sshd[5726]: Accepted publickey for core from 139.178.89.65 port 53488 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:10.005869 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:10.011886 systemd-logind[1516]: New session 19 of user core.
May 15 13:10:10.017680 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 13:10:10.351249 sshd[5728]: Connection closed by 139.178.89.65 port 53488
May 15 13:10:10.351970 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
May 15 13:10:10.358144 systemd[1]: sshd@19-172.236.109.179:22-139.178.89.65:53488.service: Deactivated successfully.
May 15 13:10:10.360813 systemd[1]: session-19.scope: Deactivated successfully.
May 15 13:10:10.362451 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit.
May 15 13:10:10.363915 systemd-logind[1516]: Removed session 19.
May 15 13:10:15.410946 systemd[1]: Started sshd@20-172.236.109.179:22-139.178.89.65:53492.service - OpenSSH per-connection server daemon (139.178.89.65:53492).
May 15 13:10:15.752809 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 53492 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:15.754262 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:15.762025 systemd-logind[1516]: New session 20 of user core.
May 15 13:10:15.765927 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 13:10:15.774137 kubelet[2762]: I0515 13:10:15.773220 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:10:15.774137 kubelet[2762]: I0515 13:10:15.773254 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:10:15.781408 kubelet[2762]: I0515 13:10:15.781143 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:10:15.814642 kubelet[2762]: I0515 13:10:15.814611 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:10:15.815370 kubelet[2762]: I0515 13:10:15.814951 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:10:15.815519 kubelet[2762]: E0515 13:10:15.815503 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:10:15.815758 kubelet[2762]: E0515 13:10:15.815634 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:10:15.816691 kubelet[2762]: E0515 13:10:15.816651 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:10:15.816748 kubelet[2762]: E0515 13:10:15.816697 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:10:15.816748 kubelet[2762]: E0515 13:10:15.816715 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:10:15.816748 kubelet[2762]: E0515 13:10:15.816727 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:10:15.816748 kubelet[2762]: E0515 13:10:15.816743 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:10:15.816844 kubelet[2762]: E0515 13:10:15.816754 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:10:15.816844 kubelet[2762]: E0515 13:10:15.816774 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:10:15.816844 kubelet[2762]: E0515 13:10:15.816787 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:10:15.816844 kubelet[2762]: I0515 13:10:15.816800 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:10:16.078601 sshd[5745]: Connection closed by 139.178.89.65 port 53492
May 15 13:10:16.077638 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
May 15 13:10:16.083404 systemd[1]: sshd@20-172.236.109.179:22-139.178.89.65:53492.service: Deactivated successfully.
May 15 13:10:16.087320 systemd[1]: session-20.scope: Deactivated successfully.
May 15 13:10:16.089772 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit.
May 15 13:10:16.092353 systemd-logind[1516]: Removed session 20.
May 15 13:10:17.957587 kubelet[2762]: E0515 13:10:17.956846 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:10:19.784444 containerd[1543]: time="2025-05-15T13:10:19.784291374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"340d03d19adae882e45de9ad55e9b772963d66aa9d96bdc6c6e82b6165e798ef\" pid:5768 exited_at:{seconds:1747314619 nanos:783801073}"
May 15 13:10:21.138774 systemd[1]: Started sshd@21-172.236.109.179:22-139.178.89.65:45406.service - OpenSSH per-connection server daemon (139.178.89.65:45406).
May 15 13:10:21.478203 sshd[5788]: Accepted publickey for core from 139.178.89.65 port 45406 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:21.480256 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:21.486211 systemd-logind[1516]: New session 21 of user core.
May 15 13:10:21.488707 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 13:10:21.803872 sshd[5790]: Connection closed by 139.178.89.65 port 45406
May 15 13:10:21.804910 sshd-session[5788]: pam_unix(sshd:session): session closed for user core
May 15 13:10:21.809799 systemd[1]: sshd@21-172.236.109.179:22-139.178.89.65:45406.service: Deactivated successfully.
May 15 13:10:21.813930 systemd[1]: session-21.scope: Deactivated successfully.
May 15 13:10:21.815225 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit.
May 15 13:10:21.820455 systemd-logind[1516]: Removed session 21.
May 15 13:10:22.959186 containerd[1543]: time="2025-05-15T13:10:22.959123970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 15 13:10:24.153855 containerd[1543]: time="2025-05-15T13:10:24.153784097Z" level=error msg="failed to cleanup \"extract-605334394-fmyg sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
May 15 13:10:24.154467 containerd[1543]: time="2025-05-15T13:10:24.154401128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device"
May 15 13:10:24.154627 containerd[1543]: time="2025-05-15T13:10:24.154489798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 15 13:10:24.154872 kubelet[2762]: E0515 13:10:24.154802 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:10:24.154872 kubelet[2762]: E0515 13:10:24.154867 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:10:24.155598 kubelet[2762]: E0515 13:10:24.155074 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7p84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError"
May 15 13:10:24.157184 kubelet[2762]: E0515 13:10:24.157120 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:10:25.855839 kubelet[2762]: I0515 13:10:25.855786 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:10:25.855839 kubelet[2762]: I0515 13:10:25.855858 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:10:25.857580 kubelet[2762]: I0515 13:10:25.857544 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:10:25.886174 kubelet[2762]: I0515 13:10:25.886134 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:10:25.886495 kubelet[2762]: I0515 13:10:25.886423 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:10:25.886610 kubelet[2762]: E0515 13:10:25.886517 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:10:25.886610 kubelet[2762]: E0515 13:10:25.886581 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:10:25.886610 kubelet[2762]: E0515 13:10:25.886597 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886639 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886650 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886658 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886670 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886677 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886686 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:10:25.886717 kubelet[2762]: E0515 13:10:25.886694 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:10:25.886881 kubelet[2762]: I0515 13:10:25.886727 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:10:26.872073 systemd[1]: Started sshd@22-172.236.109.179:22-139.178.89.65:44488.service - OpenSSH per-connection server daemon (139.178.89.65:44488).
May 15 13:10:27.221772 sshd[5806]: Accepted publickey for core from 139.178.89.65 port 44488 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:27.225550 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:27.232468 systemd-logind[1516]: New session 22 of user core.
May 15 13:10:27.242852 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 13:10:27.557878 sshd[5808]: Connection closed by 139.178.89.65 port 44488
May 15 13:10:27.558706 sshd-session[5806]: pam_unix(sshd:session): session closed for user core
May 15 13:10:27.564355 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit.
May 15 13:10:27.565289 systemd[1]: sshd@22-172.236.109.179:22-139.178.89.65:44488.service: Deactivated successfully.
May 15 13:10:27.569031 systemd[1]: session-22.scope: Deactivated successfully.
May 15 13:10:27.574081 systemd-logind[1516]: Removed session 22.
May 15 13:10:32.622969 systemd[1]: Started sshd@23-172.236.109.179:22-139.178.89.65:44496.service - OpenSSH per-connection server daemon (139.178.89.65:44496).
May 15 13:10:32.961308 sshd[5824]: Accepted publickey for core from 139.178.89.65 port 44496 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:10:32.963707 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:10:32.971754 systemd-logind[1516]: New session 23 of user core.
May 15 13:10:32.975983 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 13:10:33.291889 sshd[5826]: Connection closed by 139.178.89.65 port 44496
May 15 13:10:33.292512 sshd-session[5824]: pam_unix(sshd:session): session closed for user core
May 15 13:10:33.297234 systemd[1]: sshd@23-172.236.109.179:22-139.178.89.65:44496.service: Deactivated successfully.
May 15 13:10:33.299731 systemd[1]: session-23.scope: Deactivated successfully.
May 15 13:10:33.300849 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit.
May 15 13:10:33.302821 systemd-logind[1516]: Removed session 23.
May 15 13:10:35.912665 kubelet[2762]: I0515 13:10:35.912375 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:10:35.912665 kubelet[2762]: I0515 13:10:35.912431 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:10:35.916244 kubelet[2762]: I0515 13:10:35.916199 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:10:35.939339 kubelet[2762]: I0515 13:10:35.939290 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:10:35.939767 kubelet[2762]: I0515 13:10:35.939512 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939775 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939793 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939803 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939812 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939820 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939827 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939837 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939846 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939860 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:10:35.939896 kubelet[2762]: E0515 13:10:35.939874 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:10:35.939896 kubelet[2762]: I0515 13:10:35.939883 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:10:35.958756 kubelet[2762]: E0515 13:10:35.958719 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:10:37.957583 kubelet[2762]: E0515 13:10:37.957456 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:10:38.355188 systemd[1]: Started sshd@24-172.236.109.179:22-139.178.89.65:37088.service - OpenSSH per-connection server daemon (139.178.89.65:37088). 
May 15 13:10:38.702168 sshd[5840]: Accepted publickey for core from 139.178.89.65 port 37088 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:10:38.704683 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:10:38.713673 systemd-logind[1516]: New session 24 of user core. May 15 13:10:38.719045 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 13:10:39.027091 sshd[5842]: Connection closed by 139.178.89.65 port 37088 May 15 13:10:39.027969 sshd-session[5840]: pam_unix(sshd:session): session closed for user core May 15 13:10:39.034017 systemd[1]: sshd@24-172.236.109.179:22-139.178.89.65:37088.service: Deactivated successfully. May 15 13:10:39.038310 systemd[1]: session-24.scope: Deactivated successfully. May 15 13:10:39.040312 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. May 15 13:10:39.042917 systemd-logind[1516]: Removed session 24. May 15 13:10:40.671330 update_engine[1518]: I20250515 13:10:40.671205 1518 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 13:10:40.671330 update_engine[1518]: I20250515 13:10:40.671332 1518 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 13:10:40.672163 update_engine[1518]: I20250515 13:10:40.671898 1518 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 13:10:40.672688 update_engine[1518]: I20250515 13:10:40.672652 1518 omaha_request_params.cc:62] Current group set to developer May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.672876 1518 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.672894 1518 update_attempter.cc:643] Scheduling an action processor start. 
May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.672914 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.673002 1518 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.673087 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.673097 1518 omaha_request_action.cc:272] Request: May 15 13:10:40.673414 update_engine[1518]: [Omaha request XML body elided] May 15 13:10:40.673414 update_engine[1518]: I20250515 13:10:40.673110 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 13:10:40.674978 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 13:10:40.676309 update_engine[1518]: I20250515 13:10:40.676278 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 13:10:40.677076 update_engine[1518]: I20250515 13:10:40.677028 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 13:10:40.700340 update_engine[1518]: E20250515 13:10:40.700045 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 13:10:40.700340 update_engine[1518]: I20250515 13:10:40.700350 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 13:10:44.092837 systemd[1]: Started sshd@25-172.236.109.179:22-139.178.89.65:37104.service - OpenSSH per-connection server daemon (139.178.89.65:37104).
May 15 13:10:44.443174 sshd[5854]: Accepted publickey for core from 139.178.89.65 port 37104 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:10:44.445433 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:10:44.452426 systemd-logind[1516]: New session 25 of user core. May 15 13:10:44.458726 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 13:10:44.754536 sshd[5856]: Connection closed by 139.178.89.65 port 37104 May 15 13:10:44.755586 sshd-session[5854]: pam_unix(sshd:session): session closed for user core May 15 13:10:44.759959 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. May 15 13:10:44.760596 systemd[1]: sshd@25-172.236.109.179:22-139.178.89.65:37104.service: Deactivated successfully. May 15 13:10:44.762918 systemd[1]: session-25.scope: Deactivated successfully. May 15 13:10:44.768473 systemd-logind[1516]: Removed session 25. May 15 13:10:45.980168 kubelet[2762]: I0515 13:10:45.979588 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:10:45.980168 kubelet[2762]: I0515 13:10:45.979635 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:10:45.984226 kubelet[2762]: I0515 13:10:45.984196 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:10:46.018486 kubelet[2762]: I0515 13:10:46.018403 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:10:46.019084 kubelet[2762]: I0515 13:10:46.019040 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019087 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019103 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019114 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019124 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019132 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019141 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019155 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019175 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019187 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:10:46.019432 kubelet[2762]: E0515 13:10:46.019197 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:10:46.019432 kubelet[2762]: I0515 13:10:46.019206 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:10:49.798909 containerd[1543]: time="2025-05-15T13:10:49.798705162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"977926e6ac71d2c2cedad47a41c5b649c01c6bc85a0be613d4d126c39da35cad\" pid:5879 exited_at:{seconds:1747314649 nanos:796524212}" May 15 13:10:49.819778 systemd[1]: Started sshd@26-172.236.109.179:22-139.178.89.65:51072.service - OpenSSH per-connection server daemon (139.178.89.65:51072). May 15 13:10:49.964140 kubelet[2762]: E0515 13:10:49.964099 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:10:50.155600 sshd[5892]: Accepted publickey for core from 139.178.89.65 port 51072 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:10:50.157850 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:10:50.166164 systemd-logind[1516]: New session 26 of user core. May 15 13:10:50.172932 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 15 13:10:50.482720 sshd[5894]: Connection closed by 139.178.89.65 port 51072 May 15 13:10:50.482575 sshd-session[5892]: pam_unix(sshd:session): session closed for user core May 15 13:10:50.490812 systemd-logind[1516]: Session 26 logged out. Waiting for processes to exit. May 15 13:10:50.492483 systemd[1]: sshd@26-172.236.109.179:22-139.178.89.65:51072.service: Deactivated successfully. May 15 13:10:50.496612 systemd[1]: session-26.scope: Deactivated successfully. May 15 13:10:50.500139 systemd-logind[1516]: Removed session 26. May 15 13:10:50.670655 update_engine[1518]: I20250515 13:10:50.670469 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 13:10:50.672593 update_engine[1518]: I20250515 13:10:50.672168 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 13:10:50.673582 update_engine[1518]: I20250515 13:10:50.673501 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 13:10:50.707652 update_engine[1518]: E20250515 13:10:50.707579 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 13:10:50.707779 update_engine[1518]: I20250515 13:10:50.707677 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 13:10:55.546023 systemd[1]: Started sshd@27-172.236.109.179:22-139.178.89.65:51086.service - OpenSSH per-connection server daemon (139.178.89.65:51086). May 15 13:10:55.884799 sshd[5907]: Accepted publickey for core from 139.178.89.65 port 51086 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:10:55.888974 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:10:55.894393 systemd-logind[1516]: New session 27 of user core. May 15 13:10:55.899710 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 15 13:10:56.047576 kubelet[2762]: I0515 13:10:56.047523 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:10:56.048296 kubelet[2762]: I0515 13:10:56.047592 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:10:56.050347 kubelet[2762]: I0515 13:10:56.050316 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:10:56.075923 kubelet[2762]: I0515 13:10:56.075890 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:10:56.076114 kubelet[2762]: I0515 13:10:56.076093 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076129 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076145 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076157 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076166 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076174 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076184 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076195 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076204 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:10:56.076204 kubelet[2762]: E0515 13:10:56.076213 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:10:56.076501 kubelet[2762]: E0515 13:10:56.076222 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:10:56.076501 kubelet[2762]: I0515 13:10:56.076233 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:10:56.214662 sshd[5909]: Connection closed by 139.178.89.65 port 51086 May 15 13:10:56.215967 sshd-session[5907]: pam_unix(sshd:session): session closed for user core May 15 13:10:56.224509 systemd[1]: sshd@27-172.236.109.179:22-139.178.89.65:51086.service: Deactivated successfully. May 15 13:10:56.231506 systemd[1]: session-27.scope: Deactivated successfully. May 15 13:10:56.233705 systemd-logind[1516]: Session 27 logged out. Waiting for processes to exit. May 15 13:10:56.235690 systemd-logind[1516]: Removed session 27. 
May 15 13:10:56.956968 kubelet[2762]: E0515 13:10:56.956929 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:00.672740 update_engine[1518]: I20250515 13:11:00.672571 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 13:11:00.673740 update_engine[1518]: I20250515 13:11:00.673132 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 13:11:00.673740 update_engine[1518]: I20250515 13:11:00.673579 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 13:11:00.674382 update_engine[1518]: E20250515 13:11:00.674312 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 13:11:00.674509 update_engine[1518]: I20250515 13:11:00.674393 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 13:11:01.279115 systemd[1]: Started sshd@28-172.236.109.179:22-139.178.89.65:56708.service - OpenSSH per-connection server daemon (139.178.89.65:56708). May 15 13:11:01.619472 sshd[5926]: Accepted publickey for core from 139.178.89.65 port 56708 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:01.621990 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:01.629165 systemd-logind[1516]: New session 28 of user core. May 15 13:11:01.635738 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 13:11:01.937756 sshd[5929]: Connection closed by 139.178.89.65 port 56708 May 15 13:11:01.938469 sshd-session[5926]: pam_unix(sshd:session): session closed for user core May 15 13:11:01.943634 systemd-logind[1516]: Session 28 logged out. Waiting for processes to exit. May 15 13:11:01.944630 systemd[1]: sshd@28-172.236.109.179:22-139.178.89.65:56708.service: Deactivated successfully. 
May 15 13:11:01.947288 systemd[1]: session-28.scope: Deactivated successfully. May 15 13:11:01.949248 systemd-logind[1516]: Removed session 28. May 15 13:11:03.957613 kubelet[2762]: E0515 13:11:03.956914 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:04.957928 containerd[1543]: time="2025-05-15T13:11:04.957859469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 13:11:06.118590 kubelet[2762]: I0515 13:11:06.118513 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:06.120010 kubelet[2762]: I0515 13:11:06.119324 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:06.121871 kubelet[2762]: I0515 13:11:06.121779 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:06.147680 kubelet[2762]: I0515 13:11:06.147319 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:06.147986 kubelet[2762]: I0515 13:11:06.147963 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:06.148148 kubelet[2762]: E0515 13:11:06.148131 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:06.148245 kubelet[2762]: E0515 13:11:06.148233 2762 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:06.148384 kubelet[2762]: E0515 13:11:06.148297 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:06.148384 kubelet[2762]: E0515 13:11:06.148308 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:11:06.148384 kubelet[2762]: E0515 13:11:06.148317 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:06.148384 kubelet[2762]: E0515 13:11:06.148326 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:06.148384 kubelet[2762]: E0515 13:11:06.148337 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:06.148857 kubelet[2762]: E0515 13:11:06.148345 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:06.148857 kubelet[2762]: E0515 13:11:06.148623 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:06.148857 kubelet[2762]: E0515 13:11:06.148635 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:06.148857 kubelet[2762]: I0515 13:11:06.148645 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:06.230873 containerd[1543]: time="2025-05-15T13:11:06.230801044Z" level=error msg="failed to cleanup \"extract-802447572-nL1C sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write 
/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 13:11:06.232650 containerd[1543]: time="2025-05-15T13:11:06.232300990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs/usr/bin/kube-controllers: no space left on device" May 15 13:11:06.232650 containerd[1543]: time="2025-05-15T13:11:06.232398511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 13:11:06.233250 kubelet[2762]: E0515 13:11:06.233137 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 13:11:06.233672 kubelet[2762]: E0515 13:11:06.233254 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 13:11:06.234877 kubelet[2762]: E0515 13:11:06.234791 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7p84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" May 15 13:11:06.236325 kubelet[2762]: E0515 13:11:06.236082 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/96/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:11:07.004364 systemd[1]: Started 
sshd@29-172.236.109.179:22-139.178.89.65:46232.service - OpenSSH per-connection server daemon (139.178.89.65:46232). May 15 13:11:07.357458 sshd[5946]: Accepted publickey for core from 139.178.89.65 port 46232 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:07.361840 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:07.370427 systemd-logind[1516]: New session 29 of user core. May 15 13:11:07.378923 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 13:11:07.692777 sshd[5948]: Connection closed by 139.178.89.65 port 46232 May 15 13:11:07.693518 sshd-session[5946]: pam_unix(sshd:session): session closed for user core May 15 13:11:07.698770 systemd-logind[1516]: Session 29 logged out. Waiting for processes to exit. May 15 13:11:07.699792 systemd[1]: sshd@29-172.236.109.179:22-139.178.89.65:46232.service: Deactivated successfully. May 15 13:11:07.703024 systemd[1]: session-29.scope: Deactivated successfully. May 15 13:11:07.705224 systemd-logind[1516]: Removed session 29. May 15 13:11:10.672829 update_engine[1518]: I20250515 13:11:10.672037 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 13:11:10.674058 update_engine[1518]: I20250515 13:11:10.673254 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 13:11:10.674058 update_engine[1518]: I20250515 13:11:10.673945 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 13:11:10.674838 update_engine[1518]: E20250515 13:11:10.674803 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 13:11:10.674902 update_engine[1518]: I20250515 13:11:10.674854 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 13:11:10.674902 update_engine[1518]: I20250515 13:11:10.674877 1518 omaha_request_action.cc:617] Omaha request response: May 15 13:11:10.675021 update_engine[1518]: E20250515 13:11:10.674990 1518 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 13:11:10.675234 update_engine[1518]: I20250515 13:11:10.675212 1518 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 13:11:10.675234 update_engine[1518]: I20250515 13:11:10.675225 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 13:11:10.675300 update_engine[1518]: I20250515 13:11:10.675235 1518 update_attempter.cc:306] Processing Done. May 15 13:11:10.675300 update_engine[1518]: E20250515 13:11:10.675284 1518 update_attempter.cc:619] Update failed. May 15 13:11:10.675514 update_engine[1518]: I20250515 13:11:10.675488 1518 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 13:11:10.675514 update_engine[1518]: I20250515 13:11:10.675495 1518 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 13:11:10.675514 update_engine[1518]: I20250515 13:11:10.675503 1518 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 13:11:10.675672 update_engine[1518]: I20250515 13:11:10.675650 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 13:11:10.675737 update_engine[1518]: I20250515 13:11:10.675720 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 13:11:10.675737 update_engine[1518]: I20250515 13:11:10.675731 1518 omaha_request_action.cc:272] Request: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675737 update_engine[1518]: May 15 13:11:10.675922 update_engine[1518]: I20250515 13:11:10.675738 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 13:11:10.675922 update_engine[1518]: I20250515 13:11:10.675907 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 13:11:10.676126 update_engine[1518]: I20250515 13:11:10.676096 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 13:11:10.677427 update_engine[1518]: E20250515 13:11:10.676774 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676812 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676820 1518 omaha_request_action.cc:617] Omaha request response: May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676827 1518 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676833 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676839 1518 update_attempter.cc:306] Processing Done. May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676846 1518 update_attempter.cc:310] Error event sent. May 15 13:11:10.677427 update_engine[1518]: I20250515 13:11:10.676862 1518 update_check_scheduler.cc:74] Next update check in 44m34s May 15 13:11:10.677684 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 13:11:10.677684 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 13:11:10.956745 kubelet[2762]: E0515 13:11:10.956615 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:12.757670 systemd[1]: Started sshd@30-172.236.109.179:22-139.178.89.65:46234.service - OpenSSH per-connection server daemon (139.178.89.65:46234). 
May 15 13:11:12.956551 kubelet[2762]: E0515 13:11:12.956487 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:13.099689 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 46234 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:13.102289 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:13.111641 systemd-logind[1516]: New session 30 of user core. May 15 13:11:13.118862 systemd[1]: Started session-30.scope - Session 30 of User core. May 15 13:11:13.421488 sshd[5976]: Connection closed by 139.178.89.65 port 46234 May 15 13:11:13.421969 sshd-session[5973]: pam_unix(sshd:session): session closed for user core May 15 13:11:13.429868 systemd-logind[1516]: Session 30 logged out. Waiting for processes to exit. May 15 13:11:13.430823 systemd[1]: sshd@30-172.236.109.179:22-139.178.89.65:46234.service: Deactivated successfully. May 15 13:11:13.434197 systemd[1]: session-30.scope: Deactivated successfully. May 15 13:11:13.436595 systemd-logind[1516]: Removed session 30. 
May 15 13:11:14.956830 kubelet[2762]: E0515 13:11:14.956790 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:15.957144 kubelet[2762]: E0515 13:11:15.957048 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:16.179936 kubelet[2762]: I0515 13:11:16.179891 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:16.179936 kubelet[2762]: I0515 13:11:16.179937 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:16.184260 kubelet[2762]: I0515 13:11:16.184203 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:16.204061 kubelet[2762]: I0515 13:11:16.204032 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:16.204199 kubelet[2762]: I0515 13:11:16.204170 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204201 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204215 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204225 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204234 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204243 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204251 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204265 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204273 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:16.204274 kubelet[2762]: E0515 13:11:16.204281 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:16.204500 kubelet[2762]: E0515 13:11:16.204289 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:16.204500 kubelet[2762]: I0515 13:11:16.204299 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:16.957744 kubelet[2762]: E0515 13:11:16.957676 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" 
pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:11:18.482475 systemd[1]: Started sshd@31-172.236.109.179:22-139.178.89.65:56832.service - OpenSSH per-connection server daemon (139.178.89.65:56832). May 15 13:11:18.828961 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 56832 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:18.830774 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:18.838531 systemd-logind[1516]: New session 31 of user core. May 15 13:11:18.841780 systemd[1]: Started session-31.scope - Session 31 of User core. May 15 13:11:19.140100 sshd[5996]: Connection closed by 139.178.89.65 port 56832 May 15 13:11:19.140830 sshd-session[5994]: pam_unix(sshd:session): session closed for user core May 15 13:11:19.146913 systemd[1]: sshd@31-172.236.109.179:22-139.178.89.65:56832.service: Deactivated successfully. May 15 13:11:19.150059 systemd[1]: session-31.scope: Deactivated successfully. May 15 13:11:19.151261 systemd-logind[1516]: Session 31 logged out. Waiting for processes to exit. May 15 13:11:19.153517 systemd-logind[1516]: Removed session 31. May 15 13:11:19.812257 containerd[1543]: time="2025-05-15T13:11:19.812139729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"bbf6a9263a6694c7f158ee993bbaa4f89c6cb1be8863c0689f1594e6852ea299\" pid:6019 exited_at:{seconds:1747314679 nanos:811334572}" May 15 13:11:24.203919 systemd[1]: Started sshd@32-172.236.109.179:22-139.178.89.65:56836.service - OpenSSH per-connection server daemon (139.178.89.65:56836). 
May 15 13:11:24.540453 sshd[6032]: Accepted publickey for core from 139.178.89.65 port 56836 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:24.542318 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:24.548227 systemd-logind[1516]: New session 32 of user core. May 15 13:11:24.553702 systemd[1]: Started session-32.scope - Session 32 of User core. May 15 13:11:24.853982 sshd[6034]: Connection closed by 139.178.89.65 port 56836 May 15 13:11:24.855632 sshd-session[6032]: pam_unix(sshd:session): session closed for user core May 15 13:11:24.858770 systemd[1]: sshd@32-172.236.109.179:22-139.178.89.65:56836.service: Deactivated successfully. May 15 13:11:24.860953 systemd[1]: session-32.scope: Deactivated successfully. May 15 13:11:24.863395 systemd-logind[1516]: Session 32 logged out. Waiting for processes to exit. May 15 13:11:24.865069 systemd-logind[1516]: Removed session 32. May 15 13:11:26.237219 kubelet[2762]: I0515 13:11:26.237173 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:26.237219 kubelet[2762]: I0515 13:11:26.237218 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:26.239847 kubelet[2762]: I0515 13:11:26.239800 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:26.262490 kubelet[2762]: I0515 13:11:26.262432 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:26.262687 kubelet[2762]: I0515 13:11:26.262572 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262604 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262618 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262629 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262667 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262679 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:26.262687 kubelet[2762]: E0515 13:11:26.262687 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:26.262924 kubelet[2762]: E0515 13:11:26.262698 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:26.262924 kubelet[2762]: E0515 13:11:26.262706 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:26.262924 kubelet[2762]: E0515 13:11:26.262713 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:26.262924 kubelet[2762]: E0515 13:11:26.262721 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:26.262924 kubelet[2762]: I0515 13:11:26.262731 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:28.959069 kubelet[2762]: E0515 13:11:28.958886 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:11:29.925014 systemd[1]: Started sshd@33-172.236.109.179:22-139.178.89.65:47100.service - OpenSSH per-connection server daemon (139.178.89.65:47100). May 15 13:11:30.274263 sshd[6046]: Accepted publickey for core from 139.178.89.65 port 47100 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:30.276179 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:30.282138 systemd-logind[1516]: New session 33 of user core. May 15 13:11:30.288697 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 13:11:30.595909 sshd[6050]: Connection closed by 139.178.89.65 port 47100 May 15 13:11:30.596886 sshd-session[6046]: pam_unix(sshd:session): session closed for user core May 15 13:11:30.601782 systemd-logind[1516]: Session 33 logged out. Waiting for processes to exit. May 15 13:11:30.602713 systemd[1]: sshd@33-172.236.109.179:22-139.178.89.65:47100.service: Deactivated successfully. May 15 13:11:30.606150 systemd[1]: session-33.scope: Deactivated successfully. May 15 13:11:30.608339 systemd-logind[1516]: Removed session 33. 
May 15 13:11:35.661206 systemd[1]: Started sshd@34-172.236.109.179:22-139.178.89.65:47108.service - OpenSSH per-connection server daemon (139.178.89.65:47108). May 15 13:11:36.005808 sshd[6063]: Accepted publickey for core from 139.178.89.65 port 47108 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:36.007860 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:36.014815 systemd-logind[1516]: New session 34 of user core. May 15 13:11:36.021903 systemd[1]: Started session-34.scope - Session 34 of User core. May 15 13:11:36.307070 kubelet[2762]: I0515 13:11:36.306580 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:36.307070 kubelet[2762]: I0515 13:11:36.306631 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:36.309424 kubelet[2762]: I0515 13:11:36.309400 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:36.332286 kubelet[2762]: I0515 13:11:36.332257 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:36.332511 kubelet[2762]: I0515 13:11:36.332475 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:36.332628 kubelet[2762]: E0515 13:11:36.332515 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:36.332628 kubelet[2762]: E0515 
13:11:36.332530 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:36.332628 kubelet[2762]: E0515 13:11:36.332540 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332549 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332663 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332674 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332696 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332705 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332713 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:36.332866 kubelet[2762]: E0515 13:11:36.332723 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:36.332866 kubelet[2762]: I0515 13:11:36.332752 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:36.335388 sshd[6065]: Connection closed by 139.178.89.65 port 47108 May 15 13:11:36.336759 sshd-session[6063]: pam_unix(sshd:session): session closed for user core May 15 13:11:36.341738 systemd[1]: 
sshd@34-172.236.109.179:22-139.178.89.65:47108.service: Deactivated successfully. May 15 13:11:36.346529 systemd[1]: session-34.scope: Deactivated successfully. May 15 13:11:36.349545 systemd-logind[1516]: Session 34 logged out. Waiting for processes to exit. May 15 13:11:36.352235 systemd-logind[1516]: Removed session 34. May 15 13:11:40.957387 kubelet[2762]: E0515 13:11:40.957331 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:11:41.406066 systemd[1]: Started sshd@35-172.236.109.179:22-139.178.89.65:46992.service - OpenSSH per-connection server daemon (139.178.89.65:46992). May 15 13:11:41.749423 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 46992 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:41.750940 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:41.758312 systemd-logind[1516]: New session 35 of user core. May 15 13:11:41.763996 systemd[1]: Started session-35.scope - Session 35 of User core. May 15 13:11:42.078391 sshd[6082]: Connection closed by 139.178.89.65 port 46992 May 15 13:11:42.079760 sshd-session[6080]: pam_unix(sshd:session): session closed for user core May 15 13:11:42.084044 systemd-logind[1516]: Session 35 logged out. Waiting for processes to exit. May 15 13:11:42.084347 systemd[1]: sshd@35-172.236.109.179:22-139.178.89.65:46992.service: Deactivated successfully. May 15 13:11:42.086545 systemd[1]: session-35.scope: Deactivated successfully. May 15 13:11:42.088942 systemd-logind[1516]: Removed session 35. 
May 15 13:11:44.957221 kubelet[2762]: E0515 13:11:44.957183 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:11:46.359135 kubelet[2762]: I0515 13:11:46.359095 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:46.359135 kubelet[2762]: I0515 13:11:46.359140 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:46.361068 kubelet[2762]: I0515 13:11:46.361039 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:46.387247 kubelet[2762]: I0515 13:11:46.387191 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:46.387408 kubelet[2762]: I0515 13:11:46.387323 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:46.387408 kubelet[2762]: E0515 13:11:46.387357 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:46.387408 kubelet[2762]: E0515 13:11:46.387373 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:46.387408 kubelet[2762]: E0515 13:11:46.387392 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 
13:11:46.387408 kubelet[2762]: E0515 13:11:46.387401 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:46.387408 kubelet[2762]: E0515 13:11:46.387410 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:46.387648 kubelet[2762]: E0515 13:11:46.387418 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:46.387648 kubelet[2762]: E0515 13:11:46.387430 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:46.387648 kubelet[2762]: E0515 13:11:46.387438 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:46.387648 kubelet[2762]: E0515 13:11:46.387447 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:46.387648 kubelet[2762]: E0515 13:11:46.387455 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:46.387648 kubelet[2762]: I0515 13:11:46.387463 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:47.144630 systemd[1]: Started sshd@36-172.236.109.179:22-139.178.89.65:38654.service - OpenSSH per-connection server daemon (139.178.89.65:38654). May 15 13:11:47.491858 sshd[6094]: Accepted publickey for core from 139.178.89.65 port 38654 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:47.493622 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:47.500176 systemd-logind[1516]: New session 36 of user core. 
May 15 13:11:47.514898 systemd[1]: Started session-36.scope - Session 36 of User core. May 15 13:11:47.869333 sshd[6096]: Connection closed by 139.178.89.65 port 38654 May 15 13:11:47.873380 sshd-session[6094]: pam_unix(sshd:session): session closed for user core May 15 13:11:47.880820 systemd-logind[1516]: Session 36 logged out. Waiting for processes to exit. May 15 13:11:47.882040 systemd[1]: sshd@36-172.236.109.179:22-139.178.89.65:38654.service: Deactivated successfully. May 15 13:11:47.890136 systemd[1]: session-36.scope: Deactivated successfully. May 15 13:11:47.896583 systemd-logind[1516]: Removed session 36. May 15 13:11:49.786544 containerd[1543]: time="2025-05-15T13:11:49.786494228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"a98ab4ef024c5f8430b70b0db47954c55851d4baff5c89a7f50b8729ea5f3e49\" pid:6121 exited_at:{seconds:1747314709 nanos:785870744}" May 15 13:11:52.936258 systemd[1]: Started sshd@37-172.236.109.179:22-139.178.89.65:38662.service - OpenSSH per-connection server daemon (139.178.89.65:38662). May 15 13:11:52.958430 kubelet[2762]: E0515 13:11:52.958202 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:11:53.291697 sshd[6133]: Accepted publickey for core from 139.178.89.65 port 38662 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:53.292328 sshd-session[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:53.297796 systemd-logind[1516]: New session 37 of user core. May 15 13:11:53.302701 systemd[1]: Started session-37.scope - Session 37 of User core. 
May 15 13:11:53.610616 sshd[6135]: Connection closed by 139.178.89.65 port 38662 May 15 13:11:53.611813 sshd-session[6133]: pam_unix(sshd:session): session closed for user core May 15 13:11:53.617967 systemd-logind[1516]: Session 37 logged out. Waiting for processes to exit. May 15 13:11:53.618922 systemd[1]: sshd@37-172.236.109.179:22-139.178.89.65:38662.service: Deactivated successfully. May 15 13:11:53.622582 systemd[1]: session-37.scope: Deactivated successfully. May 15 13:11:53.627247 systemd-logind[1516]: Removed session 37. May 15 13:11:56.443453 kubelet[2762]: I0515 13:11:56.443417 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:11:56.444669 kubelet[2762]: I0515 13:11:56.443607 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:11:56.447241 kubelet[2762]: I0515 13:11:56.446783 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:11:56.474962 kubelet[2762]: I0515 13:11:56.474891 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:11:56.475447 kubelet[2762]: I0515 13:11:56.475392 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:11:56.475520 kubelet[2762]: E0515 13:11:56.475435 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:11:56.475520 kubelet[2762]: E0515 13:11:56.475486 2762 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:11:56.475520 kubelet[2762]: E0515 13:11:56.475496 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:11:56.475520 kubelet[2762]: E0515 13:11:56.475507 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:11:56.475520 kubelet[2762]: E0515 13:11:56.475515 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:11:56.476144 kubelet[2762]: E0515 13:11:56.475524 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:11:56.476144 kubelet[2762]: E0515 13:11:56.475584 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:11:56.476144 kubelet[2762]: E0515 13:11:56.475594 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:11:56.476144 kubelet[2762]: E0515 13:11:56.475607 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:11:56.476144 kubelet[2762]: E0515 13:11:56.475615 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:11:56.476144 kubelet[2762]: I0515 13:11:56.475625 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:11:58.673886 systemd[1]: Started sshd@38-172.236.109.179:22-139.178.89.65:48032.service - OpenSSH per-connection server daemon (139.178.89.65:48032). 
May 15 13:11:59.025813 sshd[6148]: Accepted publickey for core from 139.178.89.65 port 48032 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:11:59.028087 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:11:59.034512 systemd-logind[1516]: New session 38 of user core. May 15 13:11:59.040723 systemd[1]: Started session-38.scope - Session 38 of User core. May 15 13:11:59.342158 sshd[6150]: Connection closed by 139.178.89.65 port 48032 May 15 13:11:59.343818 sshd-session[6148]: pam_unix(sshd:session): session closed for user core May 15 13:11:59.348123 systemd[1]: sshd@38-172.236.109.179:22-139.178.89.65:48032.service: Deactivated successfully. May 15 13:11:59.350983 systemd[1]: session-38.scope: Deactivated successfully. May 15 13:11:59.352609 systemd-logind[1516]: Session 38 logged out. Waiting for processes to exit. May 15 13:11:59.354779 systemd-logind[1516]: Removed session 38. May 15 13:12:01.957262 kubelet[2762]: E0515 13:12:01.957191 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:12:04.405491 systemd[1]: Started sshd@39-172.236.109.179:22-139.178.89.65:48042.service - OpenSSH per-connection server daemon (139.178.89.65:48042). May 15 13:12:04.742373 sshd[6162]: Accepted publickey for core from 139.178.89.65 port 48042 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:12:04.744309 sshd-session[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:12:04.750617 systemd-logind[1516]: New session 39 of user core. May 15 13:12:04.758719 systemd[1]: Started session-39.scope - Session 39 of User core. 
May 15 13:12:05.048846 sshd[6164]: Connection closed by 139.178.89.65 port 48042
May 15 13:12:05.049604 sshd-session[6162]: pam_unix(sshd:session): session closed for user core
May 15 13:12:05.054480 systemd-logind[1516]: Session 39 logged out. Waiting for processes to exit.
May 15 13:12:05.055236 systemd[1]: sshd@39-172.236.109.179:22-139.178.89.65:48042.service: Deactivated successfully.
May 15 13:12:05.058379 systemd[1]: session-39.scope: Deactivated successfully.
May 15 13:12:05.060414 systemd-logind[1516]: Removed session 39.
May 15 13:12:06.498883 kubelet[2762]: I0515 13:12:06.498850 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:12:06.499524 kubelet[2762]: I0515 13:12:06.498904 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:12:06.500371 kubelet[2762]: I0515 13:12:06.500341 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:12:06.519371 kubelet[2762]: I0515 13:12:06.519333 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:12:06.519947 kubelet[2762]: I0515 13:12:06.519918 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.519960 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.519975 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.519984 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.519993 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.520001 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.520010 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.520021 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:12:06.520038 kubelet[2762]: E0515 13:12:06.520029 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:12:06.520227 kubelet[2762]: E0515 13:12:06.520037 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:12:06.520227 kubelet[2762]: E0515 13:12:06.520067 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:12:06.520227 kubelet[2762]: I0515 13:12:06.520077 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:12:07.958611 kubelet[2762]: E0515 13:12:07.958517 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:12:10.111079 systemd[1]: Started sshd@40-172.236.109.179:22-139.178.89.65:46926.service - OpenSSH per-connection server daemon (139.178.89.65:46926).
May 15 13:12:10.451904 sshd[6178]: Accepted publickey for core from 139.178.89.65 port 46926 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:10.454357 sshd-session[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:10.462388 systemd-logind[1516]: New session 40 of user core.
May 15 13:12:10.466932 systemd[1]: Started session-40.scope - Session 40 of User core.
May 15 13:12:10.765963 sshd[6180]: Connection closed by 139.178.89.65 port 46926
May 15 13:12:10.767815 sshd-session[6178]: pam_unix(sshd:session): session closed for user core
May 15 13:12:10.773600 systemd-logind[1516]: Session 40 logged out. Waiting for processes to exit.
May 15 13:12:10.776453 systemd[1]: sshd@40-172.236.109.179:22-139.178.89.65:46926.service: Deactivated successfully.
May 15 13:12:10.779674 systemd[1]: session-40.scope: Deactivated successfully.
May 15 13:12:10.783417 systemd-logind[1516]: Removed session 40.
May 15 13:12:13.708112 systemd[1]: Started sshd@41-172.236.109.179:22-218.92.0.204:59166.service - OpenSSH per-connection server daemon (218.92.0.204:59166).
May 15 13:12:13.932207 sshd[6193]: Unable to negotiate with 218.92.0.204 port 59166: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
May 15 13:12:13.935544 systemd[1]: sshd@41-172.236.109.179:22-218.92.0.204:59166.service: Deactivated successfully.
May 15 13:12:15.831023 systemd[1]: Started sshd@42-172.236.109.179:22-139.178.89.65:46940.service - OpenSSH per-connection server daemon (139.178.89.65:46940).
May 15 13:12:16.181441 sshd[6198]: Accepted publickey for core from 139.178.89.65 port 46940 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:16.183150 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:16.188990 systemd-logind[1516]: New session 41 of user core.
May 15 13:12:16.194710 systemd[1]: Started session-41.scope - Session 41 of User core.
May 15 13:12:16.498159 sshd[6200]: Connection closed by 139.178.89.65 port 46940
May 15 13:12:16.498991 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
May 15 13:12:16.503868 systemd-logind[1516]: Session 41 logged out. Waiting for processes to exit.
May 15 13:12:16.504780 systemd[1]: sshd@42-172.236.109.179:22-139.178.89.65:46940.service: Deactivated successfully.
May 15 13:12:16.507538 systemd[1]: session-41.scope: Deactivated successfully.
May 15 13:12:16.509609 systemd-logind[1516]: Removed session 41.
May 15 13:12:16.545523 kubelet[2762]: I0515 13:12:16.545484 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:12:16.546614 kubelet[2762]: I0515 13:12:16.545538 2762 container_gc.go:88] "Attempting to delete unused containers"
May 15 13:12:16.547747 kubelet[2762]: I0515 13:12:16.547706 2762 image_gc_manager.go:431] "Attempting to delete unused images"
May 15 13:12:16.566933 kubelet[2762]: I0515 13:12:16.566894 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:12:16.567097 kubelet[2762]: I0515 13:12:16.567048 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:12:16.567097 kubelet[2762]: E0515 13:12:16.567080 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:12:16.567097 kubelet[2762]: E0515 13:12:16.567093 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567103 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567114 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567122 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567130 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567142 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567151 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567160 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:12:16.567606 kubelet[2762]: E0515 13:12:16.567168 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:12:16.567606 kubelet[2762]: I0515 13:12:16.567176 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:12:18.959087 kubelet[2762]: E0515 13:12:18.958634 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:12:19.795241 containerd[1543]: time="2025-05-15T13:12:19.795189241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"dd18f3ee135c1da02337ec830ee2b6f86eb3262aff8af8eed2be857b5f68fd81\" pid:6223 exited_at:{seconds:1747314739 nanos:794796418}"
May 15 13:12:19.964232 kubelet[2762]: E0515 13:12:19.964191 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:12:21.561098 systemd[1]: Started sshd@43-172.236.109.179:22-139.178.89.65:39602.service - OpenSSH per-connection server daemon (139.178.89.65:39602).
May 15 13:12:21.894063 sshd[6241]: Accepted publickey for core from 139.178.89.65 port 39602 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:21.895856 sshd-session[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:21.902250 systemd-logind[1516]: New session 42 of user core.
May 15 13:12:21.906691 systemd[1]: Started session-42.scope - Session 42 of User core.
May 15 13:12:21.959079 kubelet[2762]: E0515 13:12:21.958680 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:12:22.199754 sshd[6243]: Connection closed by 139.178.89.65 port 39602
May 15 13:12:22.200617 sshd-session[6241]: pam_unix(sshd:session): session closed for user core
May 15 13:12:22.204805 systemd-logind[1516]: Session 42 logged out. Waiting for processes to exit.
May 15 13:12:22.205772 systemd[1]: sshd@43-172.236.109.179:22-139.178.89.65:39602.service: Deactivated successfully.
May 15 13:12:22.207923 systemd[1]: session-42.scope: Deactivated successfully.
May 15 13:12:22.210809 systemd-logind[1516]: Removed session 42.
May 15 13:12:23.958367 kubelet[2762]: E0515 13:12:23.958164 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:12:24.053479 containerd[1543]: time="2025-05-15T13:12:24.053173023Z" level=warning msg="container event discarded" container=69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb type=CONTAINER_CREATED_EVENT
May 15 13:12:24.054112 containerd[1543]: time="2025-05-15T13:12:24.053439774Z" level=warning msg="container event discarded" container=69804abbb9c6955efcabc5b03c4f387f4565c5657259003ff3b0a71a237244bb type=CONTAINER_STARTED_EVENT
May 15 13:12:24.074668 containerd[1543]: time="2025-05-15T13:12:24.074600870Z" level=warning msg="container event discarded" container=a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48 type=CONTAINER_CREATED_EVENT
May 15 13:12:24.074952 containerd[1543]: time="2025-05-15T13:12:24.074920072Z" level=warning msg="container event discarded" container=a15b7e7d653fb79896c8f3631c82f9654a1675fd087bc9bf78caf746d5f1cd48 type=CONTAINER_STARTED_EVENT
May 15 13:12:24.074952 containerd[1543]: time="2025-05-15T13:12:24.074939742Z" level=warning msg="container event discarded" container=0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b type=CONTAINER_CREATED_EVENT
May 15 13:12:24.074952 containerd[1543]: time="2025-05-15T13:12:24.074952482Z" level=warning msg="container event discarded" container=0906db8c1f880c0d28fa9a424ea2c542d95e85acace31e90e7a0a9ff8c7e358b type=CONTAINER_STARTED_EVENT
May 15 13:12:24.098374 containerd[1543]: time="2025-05-15T13:12:24.098288299Z" level=warning msg="container event discarded" container=6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666 type=CONTAINER_CREATED_EVENT
May 15 13:12:24.098374 containerd[1543]: time="2025-05-15T13:12:24.098339160Z" level=warning msg="container event discarded" container=e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf type=CONTAINER_CREATED_EVENT
May 15 13:12:24.110754 containerd[1543]: time="2025-05-15T13:12:24.110712447Z" level=warning msg="container event discarded" container=4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad type=CONTAINER_CREATED_EVENT
May 15 13:12:24.230242 containerd[1543]: time="2025-05-15T13:12:24.230068579Z" level=warning msg="container event discarded" container=e8895f5b0d2f159b65cb8c6adb03cd727cde51892c30207ab36e3f801b68d0cf type=CONTAINER_STARTED_EVENT
May 15 13:12:24.246421 containerd[1543]: time="2025-05-15T13:12:24.246339039Z" level=warning msg="container event discarded" container=6bd3f1b38e976e3f57c379aeabe9ca86edb513af3e697273d90b0fadb11c7666 type=CONTAINER_STARTED_EVENT
May 15 13:12:24.295763 containerd[1543]: time="2025-05-15T13:12:24.295690659Z" level=warning msg="container event discarded" container=4a65f1d062db945fa6d906deb6bf42bf594bd92acd37a0c32fba53c9d8e50fad type=CONTAINER_STARTED_EVENT
May 15 13:12:26.599305 kubelet[2762]: I0515 13:12:26.599260 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:12:26.599305 kubelet[2762]: I0515 13:12:26.599313 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:12:26.599933 kubelet[2762]: I0515 13:12:26.599471 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599510 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599539 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599617 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599637 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599650 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599663 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599684 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599697 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599708 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:12:26.599933 kubelet[2762]: E0515 13:12:26.599720 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:12:26.599933 kubelet[2762]: I0515 13:12:26.599735 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:12:27.266876 systemd[1]: Started sshd@44-172.236.109.179:22-139.178.89.65:35618.service - OpenSSH per-connection server daemon (139.178.89.65:35618).
May 15 13:12:27.620634 sshd[6255]: Accepted publickey for core from 139.178.89.65 port 35618 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:27.621760 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:27.631612 systemd-logind[1516]: New session 43 of user core.
May 15 13:12:27.636806 systemd[1]: Started session-43.scope - Session 43 of User core.
May 15 13:12:27.946973 sshd[6257]: Connection closed by 139.178.89.65 port 35618
May 15 13:12:27.947954 sshd-session[6255]: pam_unix(sshd:session): session closed for user core
May 15 13:12:27.953368 systemd-logind[1516]: Session 43 logged out. Waiting for processes to exit.
May 15 13:12:27.954465 systemd[1]: sshd@44-172.236.109.179:22-139.178.89.65:35618.service: Deactivated successfully.
May 15 13:12:27.958106 systemd[1]: session-43.scope: Deactivated successfully.
May 15 13:12:27.963582 systemd-logind[1516]: Removed session 43.
May 15 13:12:29.982335 kubelet[2762]: I0515 13:12:29.982275 2762 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=410277478 lowThreshold=80
May 15 13:12:29.982335 kubelet[2762]: E0515 13:12:29.982342 2762 kubelet.go:1474] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 410277478 bytes, but only found 0 bytes eligible to free."
May 15 13:12:32.957486 kubelet[2762]: E0515 13:12:32.957443 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:12:33.010742 systemd[1]: Started sshd@45-172.236.109.179:22-139.178.89.65:35626.service - OpenSSH per-connection server daemon (139.178.89.65:35626).
May 15 13:12:33.344803 sshd[6273]: Accepted publickey for core from 139.178.89.65 port 35626 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:33.346790 sshd-session[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:33.352726 systemd-logind[1516]: New session 44 of user core.
May 15 13:12:33.358697 systemd[1]: Started session-44.scope - Session 44 of User core.
May 15 13:12:33.656735 sshd[6275]: Connection closed by 139.178.89.65 port 35626
May 15 13:12:33.657830 sshd-session[6273]: pam_unix(sshd:session): session closed for user core
May 15 13:12:33.665141 systemd[1]: sshd@45-172.236.109.179:22-139.178.89.65:35626.service: Deactivated successfully.
May 15 13:12:33.666996 systemd-logind[1516]: Session 44 logged out. Waiting for processes to exit.
May 15 13:12:33.670247 systemd[1]: session-44.scope: Deactivated successfully.
May 15 13:12:33.675047 systemd-logind[1516]: Removed session 44.
May 15 13:12:33.959665 containerd[1543]: time="2025-05-15T13:12:33.958365610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 15 13:12:35.961109 kubelet[2762]: E0515 13:12:35.959926 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
May 15 13:12:36.635720 kubelet[2762]: I0515 13:12:36.635685 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 15 13:12:36.635720 kubelet[2762]: I0515 13:12:36.635725 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 15 13:12:36.636034 kubelet[2762]: I0515 13:12:36.635843 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"]
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635871 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635885 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635895 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635903 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635911 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635919 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635930 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635939 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635947 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179"
May 15 13:12:36.636034 kubelet[2762]: E0515 13:12:36.635956 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179"
May 15 13:12:36.636034 kubelet[2762]: I0515 13:12:36.635964 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 15 13:12:37.305358 containerd[1543]: time="2025-05-15T13:12:37.305249707Z" level=warning msg="container event discarded" container=92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937 type=CONTAINER_CREATED_EVENT
May 15 13:12:37.305358 containerd[1543]: time="2025-05-15T13:12:37.305320237Z" level=warning msg="container event discarded" container=92f5427d4ca5000cf8150be355f0f67c410eeaad2f108db48f13f1dd0154f937 type=CONTAINER_STARTED_EVENT
May 15 13:12:37.330937 containerd[1543]: time="2025-05-15T13:12:37.330859166Z" level=warning msg="container event discarded" container=5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f type=CONTAINER_CREATED_EVENT
May 15 13:12:37.426280 containerd[1543]: time="2025-05-15T13:12:37.426221748Z" level=warning msg="container event discarded" container=5d2bbddc046a095743b44d4b1fedbfa2341b8fbd203128c946ac9572aebc612f type=CONTAINER_STARTED_EVENT
May 15 13:12:37.915158 containerd[1543]: time="2025-05-15T13:12:37.915048245Z" level=warning msg="container event discarded" container=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d type=CONTAINER_CREATED_EVENT
May 15 13:12:37.915158 containerd[1543]: time="2025-05-15T13:12:37.915125345Z" level=warning msg="container event discarded" container=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d type=CONTAINER_STARTED_EVENT
May 15 13:12:37.922534 containerd[1543]: time="2025-05-15T13:12:37.922464463Z" level=error msg="failed to cleanup \"extract-870805948-nU1T sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
May 15 13:12:37.923983 containerd[1543]: time="2025-05-15T13:12:37.923870700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/kube-controllers: no space left on device"
May 15 13:12:37.924084 containerd[1543]: time="2025-05-15T13:12:37.924066741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 15 13:12:37.924465 kubelet[2762]: E0515 13:12:37.924419 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:12:37.925022 kubelet[2762]: E0515 13:12:37.924480 2762 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3"
May 15 13:12:37.925324 kubelet[2762]: E0515 13:12:37.925253 2762 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7p84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f97f99f64-zpxjv_calico-system(627c03e7-e267-48fe-b4ed-2069e33dcd5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError"
May 15 13:12:37.926873 kubelet[2762]: E0515 13:12:37.926628 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"
May 15 13:12:38.725674 systemd[1]: Started sshd@46-172.236.109.179:22-139.178.89.65:39588.service - OpenSSH per-connection server daemon (139.178.89.65:39588).
May 15 13:12:39.099484 sshd[6293]: Accepted publickey for core from 139.178.89.65 port 39588 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY
May 15 13:12:39.101769 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:12:39.110062 systemd-logind[1516]: New session 45 of user core.
May 15 13:12:39.115767 systemd[1]: Started session-45.scope - Session 45 of User core.
May 15 13:12:39.413581 sshd[6295]: Connection closed by 139.178.89.65 port 39588
May 15 13:12:39.414848 sshd-session[6293]: pam_unix(sshd:session): session closed for user core
May 15 13:12:39.419425 systemd[1]: sshd@46-172.236.109.179:22-139.178.89.65:39588.service: Deactivated successfully.
May 15 13:12:39.422757 systemd[1]: session-45.scope: Deactivated successfully.
May 15 13:12:39.425694 systemd-logind[1516]: Session 45 logged out. Waiting for processes to exit.
May 15 13:12:39.428116 systemd-logind[1516]: Removed session 45.
May 15 13:12:39.609824 containerd[1543]: time="2025-05-15T13:12:39.609537438Z" level=warning msg="container event discarded" container=7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248 type=CONTAINER_CREATED_EVENT May 15 13:12:39.751075 containerd[1543]: time="2025-05-15T13:12:39.750941184Z" level=warning msg="container event discarded" container=7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248 type=CONTAINER_STARTED_EVENT May 15 13:12:41.958681 kubelet[2762]: E0515 13:12:41.958620 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" May 15 13:12:44.057216 containerd[1543]: time="2025-05-15T13:12:44.057108794Z" level=warning msg="container event discarded" container=e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df type=CONTAINER_CREATED_EVENT May 15 13:12:44.057899 containerd[1543]: time="2025-05-15T13:12:44.057289965Z" level=warning msg="container event discarded" container=e9a974d0d3697d496d25111fd3100468eb230419767e826c8d53ffda97a915df type=CONTAINER_STARTED_EVENT May 15 13:12:44.057899 containerd[1543]: time="2025-05-15T13:12:44.057301705Z" level=warning msg="container event discarded" container=1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5 type=CONTAINER_CREATED_EVENT May 15 13:12:44.057899 containerd[1543]: time="2025-05-15T13:12:44.057314185Z" level=warning msg="container event discarded" container=1f7d1d72543f15493da50e14240c79a3fbef5f35d0db44b28801a73dc89f1fb5 type=CONTAINER_STARTED_EVENT May 15 13:12:44.476708 systemd[1]: Started sshd@47-172.236.109.179:22-139.178.89.65:39598.service - OpenSSH per-connection server daemon (139.178.89.65:39598). 
May 15 13:12:44.823103 sshd[6308]: Accepted publickey for core from 139.178.89.65 port 39598 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:12:44.825102 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:12:44.830401 systemd-logind[1516]: New session 46 of user core. May 15 13:12:44.841714 systemd[1]: Started session-46.scope - Session 46 of User core. May 15 13:12:45.148194 sshd[6310]: Connection closed by 139.178.89.65 port 39598 May 15 13:12:45.149798 sshd-session[6308]: pam_unix(sshd:session): session closed for user core May 15 13:12:45.154350 systemd-logind[1516]: Session 46 logged out. Waiting for processes to exit. May 15 13:12:45.155355 systemd[1]: sshd@47-172.236.109.179:22-139.178.89.65:39598.service: Deactivated successfully. May 15 13:12:45.158216 systemd[1]: session-46.scope: Deactivated successfully. May 15 13:12:45.159992 systemd-logind[1516]: Removed session 46. May 15 13:12:46.488535 containerd[1543]: time="2025-05-15T13:12:46.488288338Z" level=warning msg="container event discarded" container=93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c type=CONTAINER_CREATED_EVENT May 15 13:12:46.667863 kubelet[2762]: I0515 13:12:46.667828 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:12:46.667863 kubelet[2762]: I0515 13:12:46.667868 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:12:46.671210 kubelet[2762]: I0515 13:12:46.671168 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:12:46.693329 kubelet[2762]: I0515 13:12:46.693285 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:12:46.693730 kubelet[2762]: I0515 13:12:46.693502 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-ftdbf","kube-system/coredns-6f6b679f8f-xfdz2","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693536 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693666 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693705 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693718 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693728 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:12:46.693730 kubelet[2762]: E0515 13:12:46.693737 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:12:46.694045 kubelet[2762]: E0515 13:12:46.693751 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:12:46.694045 kubelet[2762]: E0515 13:12:46.693760 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:12:46.694045 kubelet[2762]: E0515 13:12:46.693769 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:12:46.694045 kubelet[2762]: E0515 13:12:46.693776 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:12:46.694045 kubelet[2762]: I0515 13:12:46.693787 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:12:46.708916 containerd[1543]: time="2025-05-15T13:12:46.708844199Z" level=warning msg="container event discarded" container=93257f3c05bf1e79aa7d292f41eaa46aba46799b45053a15da32a53b5b14e30c type=CONTAINER_STARTED_EVENT May 15 13:12:47.589407 containerd[1543]: time="2025-05-15T13:12:47.589330217Z" level=warning msg="container event discarded" container=e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80 type=CONTAINER_CREATED_EVENT May 15 13:12:47.760896 containerd[1543]: time="2025-05-15T13:12:47.760817877Z" level=warning msg="container event discarded" container=e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80 type=CONTAINER_STARTED_EVENT May 15 13:12:47.890248 containerd[1543]: time="2025-05-15T13:12:47.890156716Z" level=warning msg="container event discarded" container=e03323e98f748ca247e299c9c383053b537b902cccf041a90e47bf4ef257fc80 type=CONTAINER_STOPPED_EVENT May 15 13:12:49.796123 containerd[1543]: time="2025-05-15T13:12:49.796083121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4debd9cd295df7525b9cab5331c6b1a5b44c4896dbfe15305c4d49b3b2eedd5\" id:\"d216537fb6478824a4145063ad488720c5669e886448a5a48cc39b777f08d41d\" pid:6342 exited_at:{seconds:1747314769 nanos:795717619}" May 15 13:12:50.214815 systemd[1]: Started sshd@48-172.236.109.179:22-139.178.89.65:40862.service - OpenSSH per-connection server daemon (139.178.89.65:40862). 
May 15 13:12:50.559776 sshd[6356]: Accepted publickey for core from 139.178.89.65 port 40862 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:12:50.561705 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:12:50.570598 systemd-logind[1516]: New session 47 of user core. May 15 13:12:50.577716 systemd[1]: Started session-47.scope - Session 47 of User core. May 15 13:12:50.876864 sshd[6358]: Connection closed by 139.178.89.65 port 40862 May 15 13:12:50.878415 sshd-session[6356]: pam_unix(sshd:session): session closed for user core May 15 13:12:50.884420 systemd-logind[1516]: Session 47 logged out. Waiting for processes to exit. May 15 13:12:50.885952 systemd[1]: sshd@48-172.236.109.179:22-139.178.89.65:40862.service: Deactivated successfully. May 15 13:12:50.889698 systemd[1]: session-47.scope: Deactivated successfully. May 15 13:12:50.893910 systemd-logind[1516]: Removed session 47. May 15 13:12:52.957748 kubelet[2762]: E0515 13:12:52.957669 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c" May 15 13:12:55.137389 containerd[1543]: time="2025-05-15T13:12:55.137273191Z" level=warning msg="container event discarded" container=eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7 type=CONTAINER_CREATED_EVENT May 15 13:12:55.439197 containerd[1543]: time="2025-05-15T13:12:55.439015159Z" level=warning msg="container event discarded" container=eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7 type=CONTAINER_STARTED_EVENT May 15 13:12:55.940775 systemd[1]: Started sshd@49-172.236.109.179:22-139.178.89.65:40876.service - OpenSSH per-connection server daemon (139.178.89.65:40876). 
May 15 13:12:56.285227 sshd[6375]: Accepted publickey for core from 139.178.89.65 port 40876 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:12:56.286731 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:12:56.293881 systemd-logind[1516]: New session 48 of user core. May 15 13:12:56.301692 systemd[1]: Started session-48.scope - Session 48 of User core. May 15 13:12:56.606643 sshd[6377]: Connection closed by 139.178.89.65 port 40876 May 15 13:12:56.607477 sshd-session[6375]: pam_unix(sshd:session): session closed for user core May 15 13:12:56.612807 systemd-logind[1516]: Session 48 logged out. Waiting for processes to exit. May 15 13:12:56.614965 systemd[1]: sshd@49-172.236.109.179:22-139.178.89.65:40876.service: Deactivated successfully. May 15 13:12:56.618191 systemd[1]: session-48.scope: Deactivated successfully. May 15 13:12:56.623343 systemd-logind[1516]: Removed session 48. May 15 13:12:56.717255 kubelet[2762]: I0515 13:12:56.717201 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:12:56.717255 kubelet[2762]: I0515 13:12:56.717251 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:12:56.718892 kubelet[2762]: I0515 13:12:56.718726 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:12:56.742174 kubelet[2762]: I0515 13:12:56.742134 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:12:56.742302 kubelet[2762]: I0515 13:12:56.742278 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742307 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742321 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742330 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742339 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742347 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742355 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742366 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742376 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742384 2762 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:12:56.742443 kubelet[2762]: E0515 13:12:56.742391 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:12:56.742443 kubelet[2762]: I0515 13:12:56.742400 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:12:57.388348 containerd[1543]: time="2025-05-15T13:12:57.388250377Z" level=warning msg="container event discarded" container=eabf709e75905f19276bc51b79359fea76f1f60cdf0aab074e0f37a6da08f6a7 type=CONTAINER_STOPPED_EVENT May 15 13:13:01.671825 systemd[1]: Started sshd@50-172.236.109.179:22-139.178.89.65:57530.service - OpenSSH per-connection server daemon (139.178.89.65:57530). May 15 13:13:02.017166 sshd[6388]: Accepted publickey for core from 139.178.89.65 port 57530 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:13:02.020231 sshd-session[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:13:02.031191 systemd-logind[1516]: New session 49 of user core. May 15 13:13:02.037728 systemd[1]: Started session-49.scope - Session 49 of User core. May 15 13:13:02.331968 sshd[6390]: Connection closed by 139.178.89.65 port 57530 May 15 13:13:02.333015 sshd-session[6388]: pam_unix(sshd:session): session closed for user core May 15 13:13:02.337395 systemd[1]: sshd@50-172.236.109.179:22-139.178.89.65:57530.service: Deactivated successfully. May 15 13:13:02.340408 systemd[1]: session-49.scope: Deactivated successfully. May 15 13:13:02.342498 systemd-logind[1516]: Session 49 logged out. Waiting for processes to exit. May 15 13:13:02.344986 systemd-logind[1516]: Removed session 49. May 15 13:13:02.392023 systemd[1]: Started sshd@51-172.236.109.179:22-139.178.89.65:57540.service - OpenSSH per-connection server daemon (139.178.89.65:57540). 
May 15 13:13:02.735690 sshd[6402]: Accepted publickey for core from 139.178.89.65 port 57540 ssh2: RSA SHA256:P949gk/CxRRNiRDkvkt5syLtdb1/vUWprMwjcIOJpIY May 15 13:13:02.737805 sshd-session[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 13:13:02.743819 systemd-logind[1516]: New session 50 of user core. May 15 13:13:02.749707 systemd[1]: Started session-50.scope - Session 50 of User core. May 15 13:13:03.049406 containerd[1543]: time="2025-05-15T13:13:03.048935307Z" level=warning msg="container event discarded" container=7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248 type=CONTAINER_STOPPED_EVENT May 15 13:13:03.092400 sshd[6404]: Connection closed by 139.178.89.65 port 57540 May 15 13:13:03.094030 sshd-session[6402]: pam_unix(sshd:session): session closed for user core May 15 13:13:03.100087 systemd-logind[1516]: Session 50 logged out. Waiting for processes to exit. May 15 13:13:03.100519 systemd[1]: sshd@51-172.236.109.179:22-139.178.89.65:57540.service: Deactivated successfully. May 15 13:13:03.105074 systemd[1]: session-50.scope: Deactivated successfully. May 15 13:13:03.109523 systemd-logind[1516]: Removed session 50. 
May 15 13:13:03.129969 containerd[1543]: time="2025-05-15T13:13:03.129892406Z" level=warning msg="container event discarded" container=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d type=CONTAINER_STOPPED_EVENT May 15 13:13:03.954983 containerd[1543]: time="2025-05-15T13:13:03.954203094Z" level=warning msg="container event discarded" container=7cc5d8735936849806b6feeacdb4f866762884061e7577164e33a27ed985c248 type=CONTAINER_DELETED_EVENT May 15 13:13:04.171335 containerd[1543]: time="2025-05-15T13:13:04.171168664Z" level=warning msg="container event discarded" container=4a0d31b3c6a1e7d66684e03d47c46b24155eeea4ee554dc8c345f119dc26692d type=CONTAINER_DELETED_EVENT May 15 13:13:06.781777 kubelet[2762]: I0515 13:13:06.781735 2762 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 13:13:06.781777 kubelet[2762]: I0515 13:13:06.781781 2762 container_gc.go:88] "Attempting to delete unused containers" May 15 13:13:06.783985 kubelet[2762]: I0515 13:13:06.783960 2762 image_gc_manager.go:431] "Attempting to delete unused images" May 15 13:13:06.804790 kubelet[2762]: I0515 13:13:06.804748 2762 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 13:13:06.804926 kubelet[2762]: I0515 13:13:06.804901 2762 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6f97f99f64-zpxjv","calico-system/calico-typha-8d889846f-9b2wr","kube-system/coredns-6f6b679f8f-xfdz2","kube-system/coredns-6f6b679f8f-ftdbf","calico-system/calico-node-h5k9z","kube-system/kube-controller-manager-172-236-109-179","calico-system/csi-node-driver-fxxht","kube-system/kube-proxy-cwjrl","kube-system/kube-apiserver-172-236-109-179","kube-system/kube-scheduler-172-236-109-179"] May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804937 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804952 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-8d889846f-9b2wr" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804964 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-xfdz2" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804972 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-ftdbf" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804983 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-h5k9z" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.804991 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-109-179" May 15 13:13:06.805012 kubelet[2762]: E0515 13:13:06.805010 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-fxxht" May 15 13:13:06.805335 kubelet[2762]: E0515 13:13:06.805018 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cwjrl" May 15 13:13:06.805335 kubelet[2762]: E0515 13:13:06.805026 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-109-179" May 15 13:13:06.805335 kubelet[2762]: E0515 13:13:06.805034 2762 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-109-179" May 15 13:13:06.805335 kubelet[2762]: I0515 13:13:06.805043 2762 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 13:13:07.959948 kubelet[2762]: E0515 13:13:07.959717 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6f97f99f64-zpxjv" podUID="627c03e7-e267-48fe-b4ed-2069e33dcd5c"